
From People Also Ask to AI Search — Reading the Signals That Matter | Mark Williams-Cook
Welcome back to The Search Session podcast! I’m Gianluca Fiorelli, and in this new episode, I’m joined by Mark Williams-Cook — SEO expert, digital strategist, and founder of AlsoAsked.com.
Mark has spent years working in search marketing, combining a clear, analytical approach with a talent for noticing trends before most people spot them.
Here are some of the most interesting things we talked about in this episode:
Communicating SEO changes with video – Mark creates short videos to explain AI search developments to clients, making concepts easier to grasp and helping them avoid the “SEO is dead” hype.
AI Search isn’t entirely new – while AI changes how we measure and prioritize SEO efforts, the core practices—technical health, quality content, and coverage—remain the same.
People Also Ask & Query Fan-Out – used strategically, these reveal user intent paths and help build better topical coverage, rather than churning out low-value content.
Mark’s Google exploit insights – the site quality score showed that failing to meet certain thresholds prevents a site from ranking, no matter how good its content is.
User signals and trust data – engagement patterns, clicks, and link graphs remain key signals in both traditional and AI search.
Get ready to discover new helpful ideas for your SEO strategy—or be reminded why some classic SEO tactics still hold their value.
Transcript
Gianluca Fiorelli: Hi, I’m Gianluca Fiorelli, and welcome back to The Search Session. Today we’re talking with someone very well-known in our SEO search industry — you probably know him for AlsoAsked.com, one of the most widely used tools by all of us.
Our guest is also the Digital Marketing Director at Candour, a UK-based agency in Norwich. He’s well-known for being quite vocal, especially on LinkedIn, sharing clever “zero-click marketing” insights in his famous Daily Unsolicited SEO Tips.
He also publishes a fantastic newsletter, which I highly recommend subscribing to: Core Updates SEO. And, as I recently learned while listening to him at the last BrightonSEO, he’s a big fan of video games. So, let’s welcome Mark Williams-Cook. Hi Mark, how are you?
Mark Williams-Cook: Hello! Very well, thank you for such a kind and generous introduction. It’s a pleasure to sit down with you, and I’m looking forward to having, hopefully, a fun chat about SEO.
Gianluca Fiorelli: I’m sure it will be fun — and not only fun. From our conversation, and especially thanks to you, I’m sure we’ll all learn something new, or at least gain a different perspective on things we think we already know.
How SEO Is Treating Mark Lately
Gianluca Fiorelli: As I always ask my guests — my SEO guests — let’s start with my classic opening question: How’s SEO treating you lately?
Mark Williams-Cook: Pretty good, to be honest. There’s obviously a lot going on. The agency, as you said—Candour—is still very busy, which is great.
AlsoAsked, which you kindly mentioned, is probably a bit busier than usual right now. We’re getting more new customers, and I think that’s partly because of the ongoing discussions around AI search. It feels like the “other half” of SEO has woken up to the idea that it’s not just about keywords — especially with all the query fan-out stuff happening as well.
And I guess for me personally, there are two things.
First, I honestly feel like my head is very full at the moment. My subconscious is busy processing all this new information, trying to figure out the best next steps for the medium- and long-term future. That’s actually really exciting — I don’t think we get moments like this very often in SEO, where there’s such a big change happening.
Second, closer to home — in the trenches at Candour — we’re currently making a series of short videos for our clients to talk to them about AI, AI search, GEO, AIO, whatever you want to call it. The reason is that many of our clients are now being actively approached by people telling them, “SEO is old hat — you need to be doing GEO or AI search now.”
So, we’re being more proactive in communicating with them about the research we’re doing, and showing them that, at least in my opinion, it’s not an entirely new thing. That’s something we’ve had to start doing that we haven’t had to do before.
Gianluca Fiorelli: Yes, I think your idea of using video is very smart — not just smart, very smart — because it’s something you can also recycle for other marketing purposes. But more importantly, it’s useful because you can shoot a video once, send it to all your clients, and avoid having to repeat the same thing over and over. That saves a lot of energy. And I think it’s needed, because a video can be more impactful.
I totally agree with you. I don’t think the “AI bros,” as I call them, are really saying anything revolutionary — yes, AI is entering search, and yes, it has its own characteristics, but so does every other surface where search happens. We didn’t invent, for example, “TTEO” for TikTok Engine Optimization. We just call it SEO for TikTok.
Mark Williams-Cook: Sure.
Gianluca Fiorelli: AI search is easy to understand because it’s combined with everything we already know. At the end of the day, many of the things these “AI bros” are suggesting are basically SEO 101.
Mark Williams-Cook: Yeah — and firstly, I want to give some credit to Wil Reynolds from Seer Interactive. He’s a bit of a low-key hero of mine; I really look up to him. I saw he had posted this very candid, selfie-style video talking about AI search, and I thought, “This is really good — I should probably be doing this.” That’s what kicked me to start creating similar videos for our clients.
Also, just this week — I think it was yesterday or the day before — at Search Central Live, I heard Gary Illyes say that, from Google’s point of view at least, when it comes to Google AI Mode and AIOs, it’s basically just good SEO practice. Of course, that’s Google-specific.
But there are things we’re changing at the agency. It’s not like nothing has shifted. What I mean is that the way people interact with search surfaces is different — they’re doing more of these multidimensional searches now.
With a traditional search engine, you can pretty much only search well for one thing. Whereas with AI search, you get this more conversational nature.
It’s becoming pretty clear — and we’re starting to see it in the data — that people are saying, “Oh, this is surprising — the traffic we’re getting from ChatGPT converts higher than from traditional search.” Which, again, isn’t actually that surprising when you think about it, because a lot of the funnel is happening inside that conversational environment.
So, there are definitely new factors we need to consider: how people interact with the information we publish, how we measure and track visibility, and maybe even tweaking the weightings of the things we’re doing for SEO.
But, as you said, the actual core SEO activities — the ones we’re doing now and the ones I can foresee us doing — remain the same, whether it’s making sure a site is technically clean, producing high-quality content, or getting coverage. They’re the same things, but perhaps weighted slightly differently.
Gianluca Fiorelli: Yes, totally. And a friend of mine once said that what has changed in our work is, obviously, what you mentioned — our own research behavior. Sometimes we talk about “people” and “searchers” as if we aren’t also people and searchers ourselves.
I’ve noticed in my own habits that I’ve changed the way I search online quite a lot. My friend pointed out something I’ve always found interesting: what has really changed is the length of a search session.
Once — even before the zero-click phenomenon, when Google was already trying to keep people inside search — the process was different. You would clarify your question in your head, or maybe through offline media, before you ever touched the search bar. Then you’d go directly to search for a specific answer.
Now, people search inside the search experience itself. And yes, what you said about ChatGPT fits perfectly: the value of a click there is often higher. The clicks are fewer compared to Google — and I think they’ll always be fewer — but that’s because people have already had a long conversation before they click.
And if they’re lucky, the link they click isn’t… hallucinated — which is a new problem we have to think about. I never imagined I’d have to add “hallucinated broken links” to the list of things to redirect for my clients. At least one-fifth of the links ChatGPT has given to my clients — I don’t know about yours — go to a 404 page.
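As a rough illustration of that clean-up task, here is a minimal sketch that checks a list of landing URLs (for example, pulled from server logs filtered to LLM referrals) and flags the ones returning a 404 so they can be redirected. The `candidate_urls` list and the domain are hypothetical placeholders.

```python
# Minimal sketch: check a list of landing URLs (e.g. pulled from your server
# logs and filtered to ChatGPT/LLM referrals) and flag the ones returning 404,
# so they can be 301-redirected to a real page. The candidate_urls list is
# hypothetical; plug in your own log export.
import requests

candidate_urls = [
    "https://www.example.com/pricing-2023",   # hypothetical hallucinated URL
    "https://www.example.com/contact",
]

broken = []
for url in candidate_urls:
    try:
        # HEAD is usually enough to read the status code without downloading the page
        resp = requests.head(url, allow_redirects=True, timeout=10)
        if resp.status_code == 404:
            broken.append(url)
    except requests.RequestException as exc:
        print(f"Could not check {url}: {exc}")

print("Candidates for 301 redirects:", broken)
```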
So yes, I totally agree with what you said. And I think you may have already had a sense of this longer-session behavior when you created a tool like AlsoAsked. You saw how these expanded queries — new potential questions and paths in the search session — were pointing in this direction.
People Also Ask in the Age of AI Search
Gianluca Fiorelli: How do you see the future for a tool like yours, especially for a feature like People Also Ask, in a context where Google seems intent on bringing more and more AI into the classic SERP?
Mark Williams-Cook: I’ve been using People Also Ask data for over 10 years now. I remember clearly the first time I showed what we were doing with that data on stage — it was in 2016, at a local conference called Talking Tech. I was speaking about SEO, and at the time, we had a command-line tool version of what would become AlsoAsked. It did some data extraction and similar tasks.
After that talk, I was quite surprised by how many people came up to me saying, “Oh, this is a really interesting tool — can you share it with us?” So I said, “Yep, fine,” and shared it. Back then, the script we were running was in Python. And then I started getting lots of support requests — people struggling to install the right version of Python or to get the required modules working. There was definitely a barrier to entry. That’s when I realised there was space to turn it into an online tool.
The reason I’d been using the data in the first place goes back to 2014 or 2015, when I saw a talk at SMX West by Paul Haahr. He was talking about how Google internally measures the success of their search results — which I found fascinating. My thinking was: if I can understand more about how Google judges its success, maybe I can predict the direction they’re heading and what kinds of things they’re looking for.
One thing he spoke about was TTR — Time to Result. Google defined it, roughly, as the amount of time between a searcher having an intent they want fulfilled and how quickly Google can deliver that result.
My original logic was: the People Also Ask questions that are surfaced are the closest in what I’d call “intent proximity.” In other words, if someone asks this question, these are the most likely follow-up questions they’ll ask next.
It’s actually quite detailed. You start with your first four questions, and when you click on one, it opens up like a concertina of more questions. But the questions you get are different if, instead of clicking on a question, you just re-search that question in Google. That’s because it represents a different starting point in terms of the searcher’s knowledge, which I found really interesting.
There are quite a few patents around this idea, where Google tries to gauge the skill level of the searcher — how technical the information should be, how in-depth it should go — based on their previous searches, the types of searches they’re using, and their behaviour when specific words are used.
There’s a lot of nuance in that data. To me, it felt like the manual version of AI search: if someone is researching a topic, you can predict the next set of questions they’ll need answered to complete their journey.
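To make the “intent proximity” idea concrete, here is a minimal sketch of how such an expansion tree could be built, assuming a hypothetical `fetch_related_questions` helper wired to a SERP data source. It is not the AlsoAsked implementation, just the shape of the idea: each level holds the questions a searcher is most likely to ask next.

```python
# Minimal sketch of a PAA expansion tree, not Mark's actual tool.
# fetch_related_questions is a hypothetical helper: in practice you would need
# a SERP API or a rendered-SERP scraper to get real People Also Ask data.
from typing import Dict, List


def fetch_related_questions(question: str) -> List[str]:
    """Hypothetical: return the PAA questions Google shows for this query."""
    raise NotImplementedError("Wire this up to a SERP data source")


def expand_question_tree(seed: str, depth: int = 2) -> Dict:
    """Recursively expand a seed query into follow-up questions.

    Each level approximates the 'intent proximity' idea: the questions a
    searcher is most likely to ask next after the current one.
    """
    node = {"question": seed, "children": []}
    if depth == 0:
        return node
    for follow_up in fetch_related_questions(seed):
        node["children"].append(expand_question_tree(follow_up, depth - 1))
    return node


# Usage (once fetch_related_questions is implemented):
# tree = expand_question_tree("how to move house", depth=2)
```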
Now, with AI search and query fan-out, that process is automated. Someone asks one question, and Google — in the background — determines the other relevant pieces of information they’ll likely need, and then ties it all together into a better experience. If they’re measuring this on a Time to Result basis, that’s going to score better for the user.
And that’s definitely a trend I’ve seen across technology for many years: if you reduce friction and give people a comparable outcome with less effort, that’s the thing that wins in the long term.
As for your question about how long I think this will last — that’s a really interesting one. In my head, I picture it as a two-stage thing. Right now, we have LLMs — whether we’re talking about Google’s AI Mode or ChatGPT — and we know, absolutely, that Google relies on its link graph. I’ve also been told, by people I trust very much, that companies like OpenAI are buying up link graph data to help with their initial pre-training and to weight the data they pay attention to.
Google, of course, has 20-odd years of experience figuring out who they should listen to on specific topics. And all of this works on the basis that a link graph exists — essentially an expression of trust and popularity on the web: people value this content, so they link to it. From Google’s point of view, without a shadow of a doubt, a lot of this relies on Google’s click data from their SERP. However you want to think about it, much of this depends on observing how people interact with their results and how satisfied they seem with them.
So, what happens if a large share of people move to AI search? Imagine 80% of searches happening entirely within one of these models. That means users aren’t visiting websites, they’re not clicking on traditional search results — which have a lot of parity and are nowhere near as personalized as AI answers — and they’re not linking to websites anymore.
So the traditional metrics we know — and yes, you can argue about spam or quality issues — generally work well at scale. But those signals will essentially decay over time and become less useful. And from a model’s point of view, instead of a link graph, we’re left with something like a language graph of how codified language fits together.
The next obvious move I’ve seen is that some of these LLM companies are talking about releasing their own browsers. I can only imagine that’s because they want a way to measure interaction data. At this stage, I think the kinds of questions surfaced by People Also Ask will probably change, because that data is still generated from traditional searches.
My last piece of that puzzle is that I think this shift is still quite far away. I don’t believe anyone yet has a working solution for getting an LLM to produce consistently good answers at scale without relying on all the metrics and link graph information from the rest of the web.
I’ve got more to say on that, but I won’t go on and on about that. That’s where I think we’re heading. For now, I believe this data will remain useful for quite a few more years.
Mark Williams-Cook: And, as I’ve mentioned before — and as you pointed out in one of your emails — the other interesting thing is that people aren’t just using LLMs as search engines. They’re also using LLMs as mini tools — for example, to do quick sums. One of the top reported uses of ChatGPT, according to the Harvard Business Review — agree with it or not — is people getting therapy from it, actually talking to it. That’s wild to me.
This is a whole different scenario. If you’re doing SEO for someone who provides therapy, the question is no longer “How many clicks did we get?” or “How can we drive traffic?” or “What content should we create?” For some people, that service has effectively been replaced. I won’t argue whether that’s a good or bad idea, but the fact is: people who might have had therapy from a human are now doing it on ChatGPT. In those cases, the very notion of a website becomes irrelevant.
And I think there will be other uses of AI that will simply take over entire slices of an industry. For me, that’s a much bigger and wider issue than “Did we get a click or not?”
Gianluca Fiorelli: Well, yes. I mean, people were already using “Dr. Google” before. But when I think about people using AI as a therapist, my concern is: how many hypochondriacs are going to emerge in a world already facing serious mental health challenges?
But yes — when I think about People Also Ask, I wonder if something similar could happen there. Don’t you think the follow-up questions we see in AI Mode — and that we saw tested in SGE last year — could lead Google to integrate the People Also Ask concept into the AI surface?
Mark Williams-Cook: Yeah. One of the things I learned from the DOJ documents was that features like Google’s People Also Ask function as a feedback and validation loop for their search results. So, for example, if you have a broad search where the intent isn’t clear — say someone searches for “GM,” do they mean General Motors, or genetically modified? You haven't got any more information.
One of the things People Also Ask boxes do is help Google understand where the majority of a query’s intent lies. So, if most people are clicking on results about General Motors, Google can automatically adjust its systems to treat that as the dominant meaning for the query.
In terms of follow-up questions in AI Mode — for sure, that could be another way they adapt the system. I think they’ll have fewer issues identifying intent in conversational search because of the way those systems work.
When we use traditional search, we take a mental step — translating what we want to find into a search term. Everyone has their own level of “Google-fu,” if you like. Some people are excellent at crafting queries to get exactly what they want, and others are less skilled. That’s something we learn through experience with a search engine. And that also kind of goes out the window with conversational search because it’s our natural way of communicating. And you can be a lot more specific — that’s one of its biggest strengths. Knowing the right questions to ask is, I think, a fascinating concept when it comes to information foraging and discovery.
For example, if I’m moving house for the first time and start searching for solicitors and conveyancers, I might not know much about that space. In that case, it’s really helpful as a searcher to see “This is what other people have asked.”
At the moment — and I’ve got a test lined up for this — my thinking is that PAA data probably aligns quite closely with both follow-up questions and query fan-out. I actually have someone doing an independent study involving PAA data for another purpose, which has been really exciting to watch. I’m not ready to share the details just yet, but one of the things on my list is to properly examine the overlap between those two data sources.
Connecting Query Fan-Out, PAA, and Follow-up Questions
Gianluca Fiorelli: Yes — in fact, this is something that came to mind earlier when you were talking about query fan-out. I think many of us — and I include myself — sometimes when we speak about things, we try to synthesize them too much.
Take query fan-out, for example. Everyone talks about it as such, but very few people go deeper and say, Okay, but when we look at it in Qforia — Mike King’s tool from iPullRank — we can actually see it has a taxonomy: expansion queries, related queries, personalised queries. Yet almost nobody talks about those distinctions.
So, when you connect People Also Ask with query fan-out — together with the very generic concept of Google’s knowledge base — maybe those follow-up questions map onto the query fan-out types, because Google knows which questions people, especially in related searches, are going to ask next.
If we study these patterns, we can make better use of query fan-out and PAA follow-up questions. Because — and I think you’ll agree with me — a lot of people use a tool like AlsoAsked poorly, treating PAA questions only as a way to discover more topics to feed the content machine.
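As a small illustration of connecting the two data sources, here is a sketch that compares a set of PAA questions against fan-out queries grouped by the taxonomy mentioned above (expansion, related, personalised), using simple token overlap. All the example queries are made up; real fan-out data would come from a tool like Qforia or from observing AI Mode, and real PAA questions from a SERP source.

```python
# Minimal sketch of comparing PAA questions against query fan-out output,
# bucketed by the taxonomy mentioned above. All example data is invented.


def tokens(text: str) -> set:
    return set(text.lower().replace("?", "").split())


def jaccard(a: str, b: str) -> float:
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0


paa_questions = [
    "how long does it take to set up electricity in a new flat",
    "do I need to cancel my internet before moving",
]

fan_out = {
    "expansion": ["set up electricity new apartment timeline"],
    "related": ["transfer internet contract when moving house"],
    "personalised": ["cheapest electricity provider for small flats"],
}

# For each fan-out bucket, find the PAA question that overlaps most with it.
for bucket, queries in fan_out.items():
    for q in queries:
        best = max(paa_questions, key=lambda p: jaccard(p, q))
        print(f"[{bucket}] {q!r} ~ closest PAA: {best!r} "
              f"(overlap {jaccard(best, q):.2f})")
```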
Mark Williams-Cook: Yeah, so one thing I’ll say as well — I’ve got the domain queryfan.com, and I’m hoping to release a tool next month that will essentially be an open community database of real AI prompts — not synthetic ones — and the related query fan-out that happens.
This is an experiment, and it will require people’s participation, but I think I’ve worked out a fairly good way to do that at scale. The idea is to have an online, searchable database where you can look up queries for specific things and then get an overview of the types of query fan-outs generated from them.
I think that’s one of the easiest things we can do in terms of AI search to figure out what we need to do. If we know these prompts regularly trigger search actions, and there’s some consistency in what the LLM is searching for, then we know what to optimise our pages for so they show up in AI search.
And you’re absolutely right about People Also Ask data. I think a lot of people got hit during the Helpful Content Update because of how they were using it. I saw lots of sites doing what I’ll loosely call “programmatic SEO” with People Also Ask data — they’d take any topic, scrape 50 PAA questions, and then, in a basic Q&A format, list the answers and generate tens of thousands of pages.
At the time, it worked really well. In fact, my 2022 BrightonSEO talk was about this — I showed examples of brand new domains going from zero to a million visitors in four months using that tactic. But I think it’s one of the imprints that made its way into the model used for the HCU. If a site has lots of pages that fit that pattern, it’s probably a signal that it’s not a great site.
The way I’ve always used People Also Ask data is different. Sure, you can use it for a classic FAQ, but my thinking is: if someone has done this search and has this question, these are the things you should ideally include in your answer. And of course, you always need a human in the loop to give context and understand the nuance.
But it’s been immensely helpful with clients to show them, “This is the majority of the questions people have around this topic.” Whenever you’re inside a company, you develop your own biases, your own blinkers, your own terminology. For some clients, it’s been quite surprising to see that. In fact, in some cases, it’s prompted us to get them talking directly to users — because at first, they didn’t believe people were asking these questions. And it turned out they were, when they actually — shock horror — decided to speak to the people they wanted to use their website.
Gianluca Fiorelli: Yes — well, now I’m speaking as a user of AlsoAsked, and personally, yes… in the very beginning, before I really understood how it worked, I used it for classic FAQ creation. That was the typical output when using People Also Ask.
But then I realised something: People Also Ask can be seen as a sort of conversational expanded query. And I began noticing Google using expanded queries everywhere. So I started thinking, “Why not combine People Also Ask with People Also Search?”
I see People Also Ask as a way of going deeper into a topic, and People Also Search as a way of widening it. Then there are topic filters. And sometimes — even though we pay less attention to image search these days — image search tags also act as expanded queries.
By combining all of this information, for me, it became true keyword research. Instead of jumping straight into whatever beloved tool we’re using — and yes, we all use a lot of tools — I stopped going crazy asking a tool’s database to do the keyword research for me.
I prefer to do that kind of keyword research manually — quite manually and, honestly, painfully. But now, more and more tools are combining all of this data. I was even able to create something with the help of a friend. I’m not a developer at all — if I tried vibe coding, the world would probably collapse.
Mark Williams-Cook: Brilliant.
Gianluca Fiorelli: But the point is, Google is already suggesting what people are actually searching for — because they know what people search. So let’s focus on that and see what this information tells us about the mapping of a topic, subtopic, and sub-subtopic. That’s why I use People Also Ask so much — to better inform the topical content clusters I want to create for my clients. Now I’m also combining query fan-out for the same reason.
For example, with one of my clients — and this was a fun case because we happened to be talking about moving to a new apartment — he provides services for people in the process of moving, such as setting up electricity, internet, rental contracts, and all those practical steps when moving from one apartment to another.
He asked me, “Why? We have perfectly good content and a strong link profile, and yet we can’t beat our competitors.” It was one of those classic super-keyword battles where the solution is hard to find.
In this case, query fan-out was incredibly useful. For me, maybe the best use of query fan-out is to say: “Okay, let’s look at the questions Google gives us — the queries it expands around this problem.”
Almost immediately, it became clear that they weren’t ranking better because, essentially, their content was saying the same thing as their competitors’. If you stripped away the website templates, the substance was identical. So the plan became: keep the content relevant to all those things, but also address the things people were asking that nobody else was answering.
In this case, it was classic — all about customer care phone numbers. The existing content was basically: “This is the customer care number for this provider in Madrid, this is the number for Barcelona,” and so on.
But at least two or three of the questions we saw were about things like, “How long do I have to wait for someone to answer?” or complaints about robocalls — you know, when you have to press “one” or “two” to get through.
Mark Williams-Cook: Press one.
Gianluca Fiorelli: Yeah. So now we’re going to work on adding content that addresses those specific concerns. It can be tricky, because you have to actually talk to customers — ask them about their experiences — or gather a statistically significant volume of data to present accurate information. But we’re going to do it.
And — because consensus is something that’s ingrained in the algorithm, if I remember well — I think consensus is fine in some cases, but too much consensus can actually make you weaker in terms of visibility and usefulness. If your brand is less well-known and you’re saying exactly the same thing as a bigger, more recognised brand, why should people choose your site over theirs, the one they already know better?
So you have to try to be different. That’s why I think features like People Also Ask, query fan-out, and all these tools Google provides are most valuable when it comes to the classic activities we tend to do.
Mark Williams-Cook: You mentioned — because I know you were really interested in them — those little bubbles at the top, the filters in search. I never actually learned the exact name of that feature. Are they still live? Because I don’t see them anymore in the UK.
Gianluca Fiorelli: No, they’re still live. I think it’s a classic feature, but Google is showing it less often now. It’s a topic filter, and it’s still there—just not as prominent. For instance, I see it more in Spain and Italy, less in the US, and less in the UK.
Another feature — though not as famous as many others — was “Things to Know.” I thought it was fantastic, because when it appeared, it essentially gave you the taxonomy you should follow when creating content. But “Things to Know” has pretty much disappeared now.
With the latest June-July core update, for one client, I actually saw a big jump in visibility—as if Google was bringing this feature back again. Let’s see what happens. We know some features have a short lifespan, but Google hasn’t yet removed the official page about topic filters.
Mark Williams-Cook: So it hasn’t reached the great Google graveyard yet.
Gianluca Fiorelli: When they decide to retire it—or officially mark it as deprecated—that’s when it’ll be gone for good.
Mark Williams-Cook: Outside of traffic considerations, I’ve always found those Google features super useful, because they give insight into how Google is thinking about and categorizing things. From a data perspective, I’ll dive into those before even looking at how external tools try to classify searches or intent.
Gianluca Fiorelli: Yes, totally. What I usually do, after collecting all this information, is run it through tools like Keyword Insights to speed up clustering. I can do clustering manually without relying on tools, but sometimes I like to compare my own clustering work with the results from a dedicated tool. That way, I can spot where I might improve, or where I’ve noticed something the tool hasn’t—or vice versa.
Mark Williams-Cook: I really like that about your approach, and what you said about using a whole range of keyword tools. We do the same, even though we made AlsoAsked. Google Suggest data, for example, is useful — at least in my opinion — at a different hierarchical level on a website. It’s more about the broader topical level, whereas PAA data is much more in the weeds — it’s about the specifics of what an individual page should cover. We’re a Keyword Insights customer as well.
I’m a big fan of anyone who’s doing what you said: combining manual work with tools. There’s huge value in that. I’m very much in favor of cross-comparing data from different sources, rather than just saying, “This is the clustering tool” or “This is the intent or keyword tool.”
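For readers who want to try the cross-comparison Mark describes, here is a minimal sketch that clusters a keyword list automatically (TF-IDF plus agglomerative clustering, assuming scikit-learn is installed) so the output can be compared with a manual grouping. A dedicated tool will do far better; the point is only to surface disagreements worth a second look.

```python
# Minimal sketch of the cross-checking idea: cluster a keyword list automatically
# and compare the result with your manual clusters. Assumes scikit-learn is
# installed; all keywords and groupings below are made-up examples.
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.text import TfidfVectorizer

keywords = [
    "set up electricity new apartment",
    "electricity provider moving house",
    "transfer internet contract new flat",
    "best internet deals for renters",
    "rental contract checklist",
]

manual_clusters = {  # your own grouping, for comparison
    "electricity": keywords[:2],
    "internet": keywords[2:4],
    "contract": keywords[4:],
}

X = TfidfVectorizer().fit_transform(keywords).toarray()
labels = AgglomerativeClustering(n_clusters=3).fit_predict(X)

for label, kw in sorted(zip(labels, keywords)):
    print(label, kw)
# Compare the printed groups with manual_clusters to see where you and the
# tool disagree; those disagreements are the interesting places to look.
```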
Gianluca Fiorelli: Yes, exactly. It can sometimes be quite expensive for a freelancer.
Mark Williams-Cook: Right.
Gianluca Fiorelli: But it’s worth it. It’s something I think is necessary to do. My manual approach comes from the early days of SEO, when almost everything was done manually. Back then, we used Xenu for crawling.
Mark Williams-Cook: I remember Xenu Sleuth!
Gianluca Fiorelli: For me, it’s a way to learn by doing. It’s like when I say: “Don’t just look at the SERPs superficially. Really look—put your eyes on the SERP and start clicking through.”
It’s like doing a user experience test—you can discover things that are right there, yet no one’s paying attention to them. For example, take the search menu. We tend to take it for granted, but I was surprised to realize that depending on the order of the options shown in the search menu, you can understand Google’s perceived search intent.
In Western countries, we read from left to right, so naturally, Google places the most important or promoted option first — the exception being AI Mode in the U.S., which is pinned on the left simply because Google wants to push it. Usually, the first option is “All,” followed by the one they believe is closest to the search intent. It could be “Shopping,” “News,” or sometimes even just “Web” — the classic web results for people who simply want links.
Mark Williams-Cook: Blow the dust off the web.
Prompt Tracking vs. Brand Visibility
Gianluca Fiorelli: Yes! And there’s another thing that’s been making me scratch my head. I wanted to talk to you about it and hear your opinion. More and more, I’m questioning why we—whether as SEOs or marketers—need to know the exact prompts people are using in AI search. Honestly, I don’t care.
Here’s why: many times I’ve tested this myself. I can enter the exact same prompt into an LLM — or into AI Overviews or AI Mode — and I’ll get a different answer each time. It depends on the moment I do the search. For example, my son, whom I sometimes use as a tester, can use the same conversational query in ChatGPT on his own computer, and the answer will be different from mine. That’s because his ChatGPT session has a different conversational history or memory than mine.
So, what should we actually know, and what should we base our data-driven marketing strategies on, if we want to understand visibility for specific prompts? What do you think?
Mark Williams-Cook: Yeah, I understand why people want to do it—they’re trying to understand their visibility. But as you rightly point out, one of the challenges is that the output from LLMs, when you’re just interacting with them on the web, isn’t deterministic. Even if the same person asks the exact same question at the exact same time, they might still get a different answer.
I’ve seen tools that measure visibility and do all kinds of weird things—like running the same prompt a hundred times and then tracking the most frequently cited sources to figure out a sort of “bell curve” of token probability. So I understand the motivation, but I’m not sold on it yet.
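As an illustration of the measurement approach Mark is describing (not an endorsement of it), here is a sketch that runs the same prompt many times and counts which domains appear in the answers. The `ask` callable is a hypothetical stand-in for whichever LLM API you would wire it to.

```python
# Minimal sketch of the "run the same prompt many times and count citations"
# approach described above. `ask` is a hypothetical callable wired to whichever
# LLM you're testing; here we simply count the domains that show up in each
# answer. This illustrates the method, nothing more.
import re
from collections import Counter
from typing import Callable


def citation_frequency(ask: Callable[[str], str], prompt: str, runs: int = 100) -> Counter:
    counts: Counter = Counter()
    domain_pattern = re.compile(r"https?://([^/\s)\"]+)")
    for _ in range(runs):
        answer = ask(prompt)                    # non-deterministic output
        domains = set(domain_pattern.findall(answer))
        counts.update(domains)                  # count each domain once per run
    return counts


# Usage (with a real `ask` implementation):
# freq = citation_frequency(ask, "what are the best vegan running shoes?", runs=100)
# print(freq.most_common(10))
```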
So we haven’t got a partner or a specific tool we use for this, because I’m not sure it’s a good long-term bet. I was having this discussion with a SaaS provider this week—they have a massive database of millions of prompts used in ChatGPT, and they can tell you whether you’re cited or not.
I asked them where these prompts come from—are they real or synthetic? They told me they’re from their search database, based on queries people are making in Google. And I said, “Obviously, people aren’t using the exact same queries in ChatGPT as they do in Google. Are you telling me that the results are very similar?” And their answer was an unqualified yes. They claimed it doesn’t matter if you phrase it in a more conversational way—you’ll still get similar citations as if you just entered a basic search term. So, first of all, I’m waiting to see the data on that, because I’m very aware it’s in their interest for that to be true.
That being said, there’s one piece of research I mentioned earlier that I’m particularly interested in: for prompts that trigger more retrieval-augmented generation, how different are the queries that get sent back to the search engine? That’s something I don’t know yet, but I’d really like to find out. As an industry, I think it’s worth trying to understand these things, because we need to figure out if they’re actually useful.
And then there’s another question I don’t think anyone can answer right now: “How much personalization actually happens?” I did a live experiment today during Office Hours SEO. They were talking about ranking in ChatGPT, so I had a short conversation with ChatGPT and casually mentioned that I was vegan—just as part of the discussion. (I’m not, by the way.) Then I asked, “Hey, I want to start running. What are some good running shoes?” And it immediately replied, “Oh, Brooks is a great brand because they don’t use animal products in their shoes.”
That was interesting to me because clearly it had retained that memory context and connected it to my potential purchase preferences. So the searches—or the brands—it was surfacing came from inside the model itself. It made that connection.
A friend of mine, Myriam Jessier, has done some really good work and research around this. If you Google “brand quadrants Myriam Jessier”, you’ll find it. It’s about what people understand about your brand, how that is echoed within AI search, and how you can improve that and make sure those models understand you, your brand, and the details about it.
Mark Williams-Cook: It doesn’t worry me as a concept, because the same number of people—taking my trainers example (I don’t know why I always use trainers!)—are going to be looking on the internet to buy trainers before or after AI, and the same amount of money is going to be spent.
And yes, I may get fewer clicks on my website that specializes in vegan running shoes—but that’s because I’m no longer being sent people who wouldn’t buy them anyway. So I’m open to people doing the research, but I’m not integrating it into workflows or making decisions on it yet, because I think there are too many unknowns.
It’s hard to have this confident — maybe, like you said earlier, “AI bro” — approach of “you need to know the prompts so you can do this,” because when I’ve had conversations with these people and started picking it apart, they don’t know either. So I’m not prepared to go to clients and say, “You need to put resources into this and stake things on it yet.”
Lessons from the Google Exploit: Site Quality Score
Gianluca Fiorelli: Right, I totally understand you, because I’m in a similar situation myself. So, I want to move quite quickly — but meaningfully — into the big discovery you made this year about the Google exploit. I don’t want another long story or case study, I just want to ask you one question: “What is the most relevant thing you learned from that discovery? If you had to choose just one insight that really made you realize you hadn’t fully understood it before — what would it be?”
Mark Williams-Cook: For me, it changed how I view SEO a little bit. Looking through that data, it became very clear that there are qualifying rounds you need to go through in Google just to be eligible to rank for something.
I think the most important thing we discovered in that exploit was the site quality score. I actually spoke this year to an ex-Googler who had worked on it, and he said he was shocked to see that leak — because it was, in his words, really top-secret stuff. The way that score worked was very brutal: if you scored below a certain threshold, you wouldn’t appear in certain places and features. And for things like featured snippets, from the data it looked to me like, once a site had made it through that qualification round, what was being done was fairly standard, basic information retrieval — nothing mega-smart — just matching to the query.
So here’s what I took away from that. I’ve seen lots of SEOs say, “Oh, we haven’t got this ranking or this featured snippet,” and they immediately focus on the content. What I learned is that your content — especially now — is always evaluated in the context of the vessel it’s delivered in: your website.
You’re measured on a bunch of factors related to that website, to that vessel. And it doesn’t matter how good your content is or how many experts you have—if you’re not scored correctly, you’re not even in the race to rank.
That’s something I look at now with clients. When they come to us and say, “We’ve lost ranking,” or “We’re not ranking for this,” and they’ve worked so hard on the content—a lot of the time, it’s not the content.
Gianluca Fiorelli: It’s like asking an assassin to do a leap of faith. But can’t we connect all these signals you mention — although I know they are very distant from one another? It’s not really just about your content, but about the whole environment wrapping the content — so, essentially, the website and the signals the website is sending.
Isn’t that, essentially, the engineering way of explaining to a machine what E-A-T is? I’ve always thought E-A-T is not a checklist. A good website usually has these signs. That’s why I always talk about semiotics — because, for example, a good newspaper usually has the name of the author clearly displayed, and a good e-commerce site always presents information about payments and so on. So, couldn’t these things, somehow, be two faces of the same coin?
E-A-T is the concept you explain to business owners, because these kinds of things are not usually said on the developer.google.com subdomain, but rather on Google’s official, public-facing The Keyword pages. And what you’ve discovered is substantial, because you cannot just directly translate these concepts and signals. You have to find associations — things that can represent these concepts — for the machine. Could it be something similar? Is that what’s happening? Just brainstorming.
Mark Williams-Cook: Yeah, take site quality score as an example, right? There was a fairly clear patent called Site Quality Score, and it described how it was calculated and estimated. It was calculated primarily with user click data, and if they had no user click data, they would build what they called a “phrase model” of the content, compare that to phrase models of known scores, and then make an estimation based on that.
By its nature, AI-generated text — unlike old-school text generation like Markov chains and Mad Lib-style article spinning, which didn’t statistically look like good content from a phrase-model point of view — did look like good content.
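To make the phrase-model idea concrete, here is a small sketch of the concept: build n-gram profiles of page text and estimate a score for a new site by comparing its profile against profiles of sites with known scores. The reference texts and scores are invented, and nothing here reflects Google’s actual implementation or weights.

```python
# Minimal sketch of the "phrase model" estimation idea from the patent Mark
# describes: build an n-gram profile of a site's text and compare it against
# profiles of sites with known quality scores. Illustration only.
from collections import Counter


def phrase_model(text: str, n: int = 2) -> Counter:
    words = text.lower().split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))


def similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two n-gram count profiles."""
    shared = set(a) & set(b)
    dot = sum(a[g] * b[g] for g in shared)
    norm = (sum(v * v for v in a.values()) ** 0.5) * (sum(v * v for v in b.values()) ** 0.5)
    return dot / norm if norm else 0.0


# Hypothetical reference profiles built from sites whose quality score is known.
reference = {
    0.9: phrase_model("detailed guide with original research and clear sourcing on moving house"),
    0.3: phrase_model("best cheap deals click here best cheap deals buy now cheap deals"),
}

new_site = phrase_model("original research and clear sourcing on moving house costs and contracts")

# Estimate: weight the known scores by how similar the new profile looks to each.
weights = {score: similarity(new_site, prof) for score, prof in reference.items()}
total = sum(weights.values())
estimate = sum(score * w for score, w in weights.items()) / total if total else None
print("Estimated quality score:", estimate)
```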
The patent made it clear how site quality score was calculated, and that was also reflected in what we saw—because we had the actual scores. For instance, the score for www.amazon.com was actually fairly average, right in the middle of the bell curve for “just an ordinary website.” Amazon’s subdomains, like Flex and its logistics subdomain, which are very specific, scored higher. And some of the top 0.1% of site quality score sites were things like the FAQ section on the library subdomain of a university. That’s because the way those sites were found was really only through search—people were specifically searching for them, and that made up a high percentage of their search traffic.
I think that’s overlooked when it comes to ranking in Google. If, for whatever reason, you lose some visibility, and Google sees that nobody is actively searching for you—if you don’t appear in search, you just don’t exist—then your site is dead in the water. Which is why people talk about brand and brand search now; brand search, or brand, has become a much bigger topic.
But I think that’s a hugely important signal now. I used to have this discussion 10 years ago, because this used to be a thing that seemed to happen in Google: with a new website, you’d rank around 40th or so, and then Google would kind of bounce you up to rank really nicely for maybe a few weeks—then bounce you back out.
I don’t see it working quite like that anymore, but that still feels to me like a basic site quality score check: “Are we getting the click-through rate we expected? Are we getting the SERP interaction signals we expected? If we remove it from search, do people try to find it anymore—or do they just not care?”
So how do you tell people to do that, from Google’s point of view? That’s where I think E-E-A-T comes in. And I agree with Google — they’ve told us it’s not a checklist, there’s no E-E-A-T score; it’s a combination of a whole myriad of things.
And it’s a mindset where you’re building something the algorithm is aiming for—you’re not aiming at the algorithm. That’s an easy trap to fall into when you start SEO: wanting to understand how the algorithm works and then basing your strategy on aiming at it, before you appreciate that the algorithm is constantly moving.
You’re aiming at a moving target, so you need to aim ahead. You can get really good results aiming at the algorithm — you figure out something that works particularly well right now, and everyone starts doing it. But then you’re just setting yourself up, because inevitably, if it works too well, at some point it will stop working.
And then everyone’s left shrugging, wishing they had been working—like you said—on the quality, on building trust, on building brand. Those are the things Google can’t take away from you.
I think a great question to ask, outside of things like technical SEO where you are doing things for bots, is: “If search engines didn’t exist, would I still be doing this thing?” And it’s a great way to know you’re building equity. If you’re making your site faster, for instance—yeah, I don’t care whether Core Web Vitals is a ranking factor. Even if Google didn’t exist, I’d still want my site to be faster. Therefore, it’s a good thing to be working on.
Gianluca Fiorelli: Yes—and that brings us back to clicks as a signal, a user signal. Not many remember, but it was more than a year ago when the SGE patents came out. And it’s also present in the documentation of AI Mode, AI Overview, and all these kinds of things. That user signal is truly important, also for Google’s AI search.
I remember very clearly the one about SGE—I think many aspects of that patent were translated and migrated into AI Overview—that wasn’t only talking about clicks. We know clicks are not the only thing—it was describing how, in the answer, people were scrolling down and really looking at certain parts. It’s through Chrome, 100%. If people stopped on a piece of the answer, or moved the mouse pointer to the part they were reading—which we do instinctively and unconsciously—Google could understand that. Not in terms of ranking the website itself, but ranking the part of the answer that matters most. They could then recalculate the answer, and of course, that has consequences for the website where that chunk of content came from.
So yes, user signals are surely the most important thing. And that’s why—going really high level, in terms of the politics of the internet—the biggest question is whether Google will still be able to own Chrome in the future because the browser wars are going to be the most relevant wars of this moment.
Getting to Know Mark, Outside the SEO Bubble
Gianluca Fiorelli: So—one hour already. Time runs fast. Let’s stop talking about SEO and start talking about you.
Mark Williams-Cook: Oh, dear!
Gianluca Fiorelli: Just a quick question so people can get to know you better—let’s talk about your love for dogs.
Mark Williams-Cook: Ah, the dogs.
Gianluca Fiorelli: From what I’ve read and seen—especially from your side business—it seems to me you’re quite a dog person.
Mark Williams-Cook: Yeah… to be honest, it’s a bit hard for me to talk about right now. I lost my dog of 10 years just the week before last, so it’s been really tough. I’m dog-sitting at the moment for a little girl named Pippi — she’s my business partner’s dog. He’s in Australia for a wedding, so I’ve got her for a month, which is great.
We had Snoop, who we rescued from a shelter when he was five. I had him for 10 years. He’d been given back to that shelter two or three times, because he needed some work. But we gave him the work. He even became part of the Candour branding around the office. One of his quirks was that he couldn’t really be left alone.
So I essentially had 10 years with this guy—at home with me, coming to work with me, sometimes sleeping on my bed, even coming to the gym with me. It’s been a big, big shift now that he’s not here.
Gianluca Fiorelli: Yes, I understand. The same thing happened to me a few years ago—though I’m more of a cat person. It’s something that, if you’ve never had a pet, you probably can’t truly understand.
But let’s switch gears and return to something more technological—in this case, video games. What was the first video game you ever played?
Mark Williams-Cook: The first game I played was on the ZX Spectrum, which our family had. It was probably something like Manic Miner, which might mean nothing to most people, but it was a super-basic platform game. You’d put in a cassette, and it would take about 20 minutes to load the game.
And back then, there was no such thing as a save game. You’d play, get your three lives, and it didn’t matter if you were an hour into the game—once you were dead, you were done. You had to start all over again. I think those early games definitely taught me some lessons about patience, perseverance, grit, and just trying again.
Gianluca Fiorelli: Yes, I remember that era well. I also had a ZX Spectrum, with its 32K of memory — a joke by today’s standards. Now, talking about role-playing games or similar genres — what’s fantastic about video games is their world-building. So, let’s imagine different game worlds — say, the world of Prince of Persia, or Warhammer 40K, which you know I like a lot. Maybe FIFA Soccer, or the world of Cyberpunk 2077. If you had to choose one, what kind of universe — flipping from reality into a virtual world — would you like to experience at least once?
Mark Williams-Cook: About video games… I like them for weird reasons. I tend to play simulation-type games—super nerdy ones like RimWorld and Dwarf Fortress. They’re basically big simulators where you control small things, but within very large, complex systems.
That’s actually why I’ve integrated them into some of my SEO talks. They remind me of the work I do with search engines and Google—another big, complex system that I have some understanding of. I find it deeply satisfying to figure out how to influence those systems, and there’s like an optimization outcome for me.
Yeah, I’ve made this whole kind of joke about it — I don’t get why people play games like Connect 4, because they’re “solved” games. I mean, I know that if you start in the middle column, it’s something like 42 moves until you can force a victory. Same with noughts and crosses — it’s a solved game.
That’s why I really like complex system games. If we played a board game together, it would, unfortunately for you, be one of those where we’d spend an hour reading the manual and then another five hours actually playing.
Gianluca Fiorelli: Well, you should know—when I was much younger, I used to play war games with my uncle and his friends. The classic kind—recreating battles like Waterloo, trying to make Napoleon win for once, at least fictionally.
Yes, I also love simulation games. I think I passed that interest on to one of my sons. He doesn’t usually play casual games—maybe occasionally with friends—but he spends a lot of time on deep simulations, like Hearts of Iron IV.
So, the really deep simulation of the Second World War, with a huge community and an endless number of models—it’s the kind of game that makes you think, Wow, this is really cool. And I totally get it. They can be an escape, but they still train your brain.
It’s similar to what happens to me when I paint. Painting isn’t just splashing colors on figures—at least not if you don’t want to be the Jackson Pollock of miniature painting. It requires methodology. You need to understand it, to know what volumetric light is, and how to place it.
Anyway, I think it’s time to let you go and get back to your work—or whatever you want to do, since today is Friday when we’re recording.
Mark Williams-Cook: Yeah, it’s quarter to five here on a Friday, so I might just make my exit after this.
Gianluca Fiorelli: Fantastic. I wish you a wonderful weekend—it’s been a pleasure. Let’s promise ourselves that in the next six months, or a year, we’ll have another conversation and see what SEO has done to us by then.
Mark Williams-Cook: Let’s do it! No problem, thank you so much, I really enjoyed that.
Gianluca Fiorelli: And thank you to all of you, our viewers and listeners. Let me do the classic sign-off: remember to subscribe, click the bell, and give this video a like—because this episode with Mark really deserves it. Thank you, and see you in the next episode.
Podcast Host
Gianluca Fiorelli
With almost 20 years of experience in web marketing, Gianluca Fiorelli is a Strategic and International SEO Consultant who helps businesses improve their visibility and performance in organic search. Gianluca has collaborated with clients from various industries and regions, such as Glassdoor, Idealista, Rastreator.com, Outsystems, Chess.com, SIXT Ride, Vegetables by Bayer, Visit California, Gamepix, James Edition and many others.
A very active member of the SEO community, Gianluca shares daily insights and best practices on SEO, content, search marketing strategy and the evolution of search on social media channels such as X, Bluesky and LinkedIn, and through the blog on his website, IloveSEO.net.