Ryan Jones and Gianluca Fiorelli

Mastering SEO Testing in the Era of AI Search | Ryan Jones

30 min read

A new episode of The Search Session is here. I’m your host, Gianluca Fiorelli, and today I’m joined by Ryan Jones, a digital marketing leader based in Nottingham.  

We tackle one of the most pressing questions in SEO today: how do you run reliable experiments and measure real success when search is being reshaped by AI? 

From testing visibility in Google AI Overviews and LLM platforms like ChatGPT and Gemini, to building topical authority through internal linking and content hubs, this conversation is packed with actionable insights for SEOs who want results beyond rankings.

What you’ll learn

  • Evolving SEO testing for AI search: SEO testing is adapting to LLM platforms like ChatGPT and Gemini by measuring incremental referral traffic from AI tools, which makes even small gains meaningful and worth testing.

  • The danger of using AI content too quickly: publishing lots of AI content without testing it first can hurt rankings, which makes small-scale testing before investing an essential strategy. 

  • The limits of blindly following SEO best practices: effective optimization requires adapting strategies to each website’s context instead of blindly following generic best practices or case studies.

  • Reframing failed SEO tests for leadership: why the best approach is to show that small-scale testing saves time, effort, and money, prevents wasted investment, and demonstrates clear return on investment.

  • Ensuring reliable SEO test results: algorithm updates, shifting SERP features, and insufficient data can distort results, making variable isolation and statistical significance essential for reliable conclusions.

  • Testing visibility in AI Overviews: there is no clear formula for appearing in AI Overviews, but if your content is featured with a clickable link inside one, it can significantly reduce traffic loss and sometimes even increase clicks.

  • The value of internal linking in SEO: internal links consistently improve rankings, traffic, and user experience by distributing authority, enhancing crawlability, and guiding users to relevant content.

  • Evaluating internal link impact: links from high-traffic pages and strong topic clusters outperform low-authority sources, while effective topical hubs rely on meaningful lateral links guided by expertise, not overlinking or weakly related connections.

  • Owning the full search journey with topical hubs: building authority by covering everything from informational content to purchase and post-purchase stages, not just ranking for isolated topics.

  • Evaluating the true meaning of SEO tests: while traffic and click-through rates are standard benchmarks, conversions and actual business growth are the real indicators of testing success.

Discover the full conversation and take away practical insights on SEO testing and building authority through smarter experimentation.

Topics covered: SEO testing · AI search · LLM visibility · Google AI Overviews · AI Overviews optimization · internal linking · lateral linking · topical hubs · topical authority · conversion-focused SEO · SEO best practices

About the Guest

Ryan Jones

Ryan Jones

Marketing Manager at SEOTesting

Ryan is a digital marketing professional with over 10 years of experience, specializing in SEO, content, growth, and product marketing. 

Since 2023, he has been marketing manager at SEOTesting, a platform that helps SEO professionals run controlled experiments and make data-led decisions using Google Search Console data.

Ryan regularly speaks at industry events and contributes articles to the SEOTesting blog, where he writes about testing strategies, measurement, and making SEO more accessible through experimentation.

Transcript

Full conversation between Gianluca Fiorelli and Ryan Jones.

Gianluca Fiorelli: Hi, and welcome back to The Search Session. I'm Gianluca Fiorelli, and today our guest is one of those people we often refer to without really knowing his face. You probably know his face if you are based in the UK, but if you are in the US or in Spain, you probably know the tool behind which this person is working and operating as a marketing manager, which is SEOTesting.

I'm really glad he accepted to be my guest here, because he always shares very interesting things. As I was saying, he is the marketing manager of SEOTesting, and he also defines himself as a digital marketing leader, SEO, content, growth, product marketing, and more.

This person is Ryan Jones. Hi, how are you doing, Ryan?

Ryan Jones: I am very well, thank you, Gianluca. Thank you very much for having me on. It's a pleasure.

Gianluca Fiorelli: Wonderful voice. Very different from mine. It sounds like you were born to participate in podcasts with that voice.

Ryan Jones: Thank you very much for the compliment. I appreciate that.

Gianluca Fiorelli: Okay, so let's start with the classic question I always ask the SEO guests of The Search Session: how is SEO treating you lately?

Ryan Jones: For me personally, it's treating me very well. There are lots of new things for me to be learning, obviously, with the AI that’s coming into it now, there are plenty of new avenues for me to look at and for me to be testing. I’ve always thought of SEO as an industry where we can never just rest on our laurels. But in terms of how it has treated me, I have absolutely no complaints.

Gianluca Fiorelli: That’s great to hear. Yes, there are a lot of things we are learning. That’s one of the main reasons I proposed The Search Session to Advanced Web Ranking, and they were very happy to participate as the producer of the video podcast. 

And, obviously, because even if AI was already there somehow as a sort of game before 2024, last year it totally exploded in the SEO world.

SEOTesting is a classic tool that allows SEOs to run SEO tests. But before talking about the importance of testing, I’m curious how the eruption of these new surfaces, like ChatGPT, Gemini, and Perplexity, is reshaping the very concept of testing. Before, testing was focused only on Google Search, not even on AI Overviews. It was really focused on classic Google Search.

Testing for ChatGPT, Gemini, and Other LLMs

Gianluca Fiorelli: So, in the concept of testing, what kind of nuances have you started to test, first as a tool maker, to see the changes? Like, “Okay, let’s test this, but for ChatGPT, let’s see if these kinds of changes are going to improve our visibility, and so on.” How do these kinds of things make your own concept of testing, and also the tool, evolve?

Ryan Jones: Yes. For us, we started with the same basic principles that we used when we first launched SEOTesting as a tool. We now have a feature within our tool that allows us to do exactly that. We can make a change on our website and tell the system, “This time we are targeting ChatGPT, we are targeting Gemini, or we are targeting a combination of these LLMs.”

Rather than measuring traditional SEO metrics like clicks, impressions, average position, and that kind of thing, what we are doing now is measuring sessions that come from these LLMs, such as ChatGPT and Gemini, and so on.

There are a couple of caveats to that. One is that referral data from LLMs at the minute is spotty at best. Sometimes they don’t even send us referral data; sometimes it is just listed as referral traffic or direct traffic, and that is incredibly hard to measure.

However, we are finding that, especially for larger sites, they are still sending enough referrals each day to make a difference. That means we can change something on the site and see the impact that change has had.

Anecdotally, we launched a content refresh not too long ago. For the time period of the test that we had set up, we had, I think, two sessions from ChatGPT, where it had picked it up in some conversation, and someone had clicked on a particular blog post link, and we had the session data from that.

And now, for the same test period, we’re at, I think, over 16 clicks from ChatGPT. These are small numbers, but in my mind, they are still relevant numbers. We can say that the changes we made to this blog post have worked. We are seeing more “traffic” from ChatGPT. 

In terms of overall visibility, at the moment, we only work on sessions because it is so difficult to measure visibility in these LLMs. We see studies from Rand Fishkin and SparkToro all the time about having to run the same prompt more than a hundred times just to get the same list in the same order and things like that.

I am hoping that in the near future, especially with LLMs starting to launch ad platforms and things like that, we will get more accurate visibility tracking. But we will see. For now, we are doing the best we can with what we have.
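
As a rough illustration of the session-based measurement Ryan describes, a script could bucket analytics sessions by referrer hostname to count LLM-driven visits. The hostnames below are assumptions for illustration; real referrer strings vary by platform and, as Ryan notes, many LLM visits arrive with no referrer at all and show up as direct traffic:

```python
from urllib.parse import urlparse

# Hypothetical referrer hostnames for LLM platforms; real values vary,
# and many LLM visits carry no referrer at all.
LLM_REFERRERS = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "gemini.google.com": "Gemini",
    "www.perplexity.ai": "Perplexity",
}

def classify_session(referrer: str) -> str:
    """Label a session as an LLM source, generic referral, or direct."""
    if not referrer:
        return "direct"
    host = urlparse(referrer).netloc.lower()
    return LLM_REFERRERS.get(host, "referral")

def count_llm_sessions(referrers: list[str]) -> dict[str, int]:
    """Tally sessions per source label for a pre/post test comparison."""
    counts: dict[str, int] = {}
    for ref in referrers:
        label = classify_session(ref)
        counts[label] = counts.get(label, 0) + 1
    return counts
```

Running this over the pre-test and post-test windows gives the per-LLM session counts that a test like Ryan's content refresh would compare; the undercounting from missing referrers applies equally to both windows, which is what makes the comparison usable at all.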

While referral data is "spotty," seeing how your brand surfaces in AI-driven results shouldn't be a guessing game. 

Advanced Web Ranking now offers specialized AI Visibility tracking, allowing you to monitor how your site performs across these new search surfaces alongside traditional SERPs.

Bridge the gap and track your AI visibility with AWR. It’s free for 7 days.

Gianluca Fiorelli: Yes, this is something we are all hoping for. Maybe a good proxy could be using the new Bing Webmaster data from Copilot. We know it is not really correct to use it as a proxy for all the LLMs, but it could somehow give us an idea about the direction of a solution or of a specific test, especially if we compare it with AI Overviews.

Because for the LLMs themselves, yes, probably they are going to present us with some kind of analytics or some kind of LLM search webmaster tool. But considering that ChatGPT is asking for a minimum of $200,000 to start advertising on ChatGPT at the moment, I do not know.

It seems to me that everything is becoming “fine” for big enterprise companies. Small to medium companies are substantially abandoned and left to themselves to invent, experiment, and try solutions through trial and error, because the data, if we want consistent data, can become very expensive for these kinds of businesses.

Test First, Scale Later

Gianluca Fiorelli: But I have another question. Many times, people, and we are seeing it these days and weeks, are noticing that many websites are tanking after having produced massive amounts of content with AI, for instance, in order to reach the biggest visibility possible.

I think one of the most interesting tests could be not only comparing the growing visibility in ChatGPT, for instance, but also comparing what those big changes mean for classic visibility in Google Search. I think you agree that sometimes these kinds of decisions really need to be tested very seriously before they are implemented.

Ryan Jones: Yes, absolutely. I am always a proponent of validating things before launching them fully and deciding that something is a strategy. I would much rather spend additional time running a small-scale test first to validate something, especially when we get into the realm of LLMs, particularly for smaller businesses. As you said, they are often left in a difficult position because not many can afford something like a $200,000 fee to start advertising on ChatGPT.

So I am absolutely a proponent of small-scale testing to see what works first. Our whole philosophy over the years has been to find what works and then double down on it until it does not work anymore, and then reevaluate from there.

The downside is that you can start to see what you mentioned before, this rank-and-tank method of mass producing AI content. That may work for a period of time, and then the site gets hit with an algorithmic or manual penalty, or something like that. We have seen many case studies on X and LinkedIn of this happening.

But I am still very much a fan of finding what works, doubling down on it, and using testing to guide that process.

Strategic testing requires high-frequency, accurate data to spot small shifts before they become big problems. 

Whether you are testing a "rank-and-tank" hypothesis or a new content refresh, Advanced Web Ranking’s daily ranking updates provide the granular insights needed to validate your small-scale tests with confidence.

Try AWR for free and start validating your SEO hypotheses today.

Gianluca Fiorelli: Yes, exactly. I think this is one of the biggest problems. Being greedy is quite common, not only in the SEO world. Sometimes it is the businesses or stakeholders who want to move too fast and try everything without really thinking about the consequences.

Let’s go a little more into the very beginning of testing and SEO in general. There is the classic definition we were also discussing with Will Kennard last week. For instance, especially in technical SEO, many times, even if SEOs are becoming more aware of this mistake, the process is still the same: audit the website, find the issues, and fix them.

Is technical SEO still just a checklist of fixes? 

Listen to Will Kennard on The Search Session as he challenges the audit-find-fix cycle that dominates the industry. Together with Gianluca, they’re exploring how modern technical SEO with JavaScript frameworks like Next.js should push websites forward, not just fixing them up.

Beyond Best Practices: Why Context Matters in SEO

Gianluca Fiorelli: But how would you explain to an SEO that sometimes a best practice may not be the best thing to follow when solving a problem for something specific on their website?

Ryan Jones: It is an interesting topic because many websites, marketers, and SEOs follow best practices. But there are problems with that, in the sense of how we base these practices.

What are they actually based on? Often, they come from anecdotal evidence. Something worked for one particular site, so people try to replicate it elsewhere and then wonder why they do not see the same results.

They can also be based on studies with weak correlation or even outdated information. We see this all the time on X and LinkedIn. And that can lead to wasted resources and misguided strategies.

I am not necessarily a fan of just implementing best practices everywhere. There are obviously caveats. You still need to follow the core SEO principles, making sure your site is technically optimized and that the content you produce is both high quality and relevant to your audience.

Best practices exist for a reason. But just seeing a case study and using that to define your entire strategy going forward goes back to that old adage. What works for an e-commerce site probably won’t work for a more informational site, like a SaaS site or a small- to medium-sized business with mainly feature pages and things like that. It all comes down to that problem I have with best practices.

Gianluca Fiorelli: Me too. This is something people usually learn with experience. Now that I have quite a few years of experience, one of the many questions I ask a new client is, “What is the technological stack of your website?”

Depending on the stack, I may recommend a best practice, but it might not be applicable. So we need to find an alternative way to solve the issue. And sometimes, being very honest, especially when working with an in-house SEO team, you have to preempt the debate: you are probably suggesting something that is not best practice, so you have to justify it. 

So, usually, this could be a tip for all the people listening to us: always present what should formally be the best practice, and from there explain why it cannot be implemented. Saying, “You have these or other options in order to find a solution that can give you the same results as the best practice.” I think this is the key point. And eventually, if they do not agree, you can say, “Okay, let’s test it. Let’s test this solution.” 

How to Explain Failed SEO Tests to Stakeholders

Gianluca Fiorelli: And then, hopefully, the testing gives positive feedback, but testing does not always give a positive result or confirm our hypothesis. So, from your experience, how do you communicate the value of a failed test, especially to stakeholders? Not necessarily to another SEO, but to the people above them.

Ryan Jones: Yes, it is a really important question because, at least from our anecdotal experience, about 50% of the tests that we run, even on our own website, fail in the sense that they do not bring the results we expected.

So it is really important to have a conversation around that. The way I frame it, especially to stakeholders who do not necessarily “speak SEO,” is this: “At the end of the day, we have spent time, and in some cases a bit of money, running this test. We have not seen the results we wanted, and that is always disappointing. But, at the same time, it saves you from going down a rabbit hole and investing in a strategy that wasn’t going to work.”

So I frame it in the sense of the test failed, but we have probably saved a lot of time, effort, and money in the future because we now know this does not work. That means we do not need to spend resources implementing it site-wide, because we ran this test before. From my experience, when speaking to stakeholders who are focused on the bottom line, this is the best way to frame it that I’ve come across.

Gianluca Fiorelli: Yes, I totally agree with you. With stakeholders, you should not talk about technical details; you should talk about money. So you can reframe it like this: “Okay, we did this test; it failed, but it saved us tons of money.”

Not only would implementing the change have been costly, but maybe it would have had a negative effect, especially if the test was going to give us negative results.

How to Test SEO When Google Keeps Changing the Rules

Gianluca Fiorelli: And one other thing: when someone thinks about testing, people usually think about classic conversion rate optimization, A/B testing. But what is very different with SEO testing is that you are testing in an environment that is not your own website.

You have another player, which is Google. We were blaming Google because it was changing so much, and not only has it not stopped changing and updating itself, but it has also increased the rhythm of changes that it presents every day.

So what kind of recommendation would you give to an SEO team or an SEO consultant, for instance, like me, to be able to isolate the noise from all these constant changes Google introduces? For example, changes in the SERP design, not even considering when you start a test, and the day after Google rolls out a spam update or a core update.

Ryan Jones: Yes, so I will start with the second point first. In terms of what happens when we run tests and then a confirmed Google algorithm update rolls out, whether it is a spam update, a core update, or even something like a Discover update, we know from experience that it is better to pause that test and then try to run it again later.

The reason is that you will not be able to say with confidence that the results of the test are due to the change you made. It could be the change Google made to its algorithm and things like that.

Then, in terms of looking at SERP features, it is always good to keep an eye on them. For example, if you are running a content refresh, you might have seen a blog post ranking for a target keyword in position eight and not necessarily getting many clicks.

So, you’ll decide to update the content, make it more readable and more thorough. Before running the test, you should also note which SERP features are present. Is there a People Also Ask box that could be taking clicks? For most informational content, there is probably an AI Overview as well. So keeping an eye on SERP features is important.

Advanced Web Ranking helps you keep an eye on the features that appear for your target queries, and spot how those SERPs evolve over time. That extra context makes it easier to understand whether performance shifts are coming from your optimization work or from changes in the search results themselves. 

Try Advanced Web Ranking for free and track the SERP dynamics behind your rankings.

Ryan Jones: And the final point, and I left this deliberately to the final point because this is kind of the most complex thing to understand, is aiming for statistical significance on most tests, if you can. And by that, I just mean making sure you have enough of what you are measuring.

So if you are measuring clicks or impressions, make sure you have enough of them to give yourself more confidence to say, “Okay, we made this change, and in the pre-test period, we were averaging, I do not know, 100 clicks per day. Now, after the test, we are averaging 150. But we have enough click data to say with some certainty that this is a statistically significant result.”

You can do all the math yourself, or you can use a tool provider that does the statistical calculations for you. We do statistical calculations for all tests in terms of time-based tests. I know SearchPilot does the same; they give a confidence level for their tests as well. But you can also do it yourself; it just involves some complicated math that I am not smart enough to understand.

Gianluca Fiorelli: And me even less. 
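
The significance check Ryan describes can be sketched without a stats library. The sketch below assumes each period's daily clicks behave roughly like a Poisson process, which is a simplification; the actual calculations used by SEOTesting or SearchPilot are not public:

```python
import math

def poisson_rate_z_test(clicks_pre: int, days_pre: int,
                        clicks_post: int, days_post: int) -> tuple[float, float]:
    """Compare average daily clicks before and after a change.

    Treats each period's total clicks as a Poisson count, so the
    variance of the daily rate is (total clicks) / days^2.
    Returns (z statistic, two-sided p-value).
    """
    rate_pre = clicks_pre / days_pre
    rate_post = clicks_post / days_post
    se = math.sqrt(clicks_pre / days_pre**2 + clicks_post / days_post**2)
    z = (rate_post - rate_pre) / se
    # Two-sided p-value from the normal approximation.
    p = math.erfc(abs(z) / math.sqrt(2))
    return z, p

# Ryan's example: ~100 clicks/day pre-test vs ~150 clicks/day post-test,
# over hypothetical 28-day windows.
z, p = poisson_rate_z_test(clicks_pre=2800, days_pre=28,
                           clicks_post=4200, days_post=28)
```

With click volumes that large, the jump from 100 to 150 clicks per day is clearly significant. The normal approximation breaks down at very small counts, though, which is exactly why results like two versus sixteen ChatGPT sessions should be read as directional rather than conclusive.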

Testing AI Overviews and Their Impact on Clicks

Gianluca Fiorelli: I have a question about a sort of test; I do not know if you are doing it. Sometimes I do it quite manually, not really as a test, but trying to find a correlation.

We know that AI Overviews are sometimes present alongside classic blue links, so is there a way to test whether having a page linked with a blue link, or consistently appearing with a blue link inside an AI Overview, correlates with less traffic loss or fewer lost clicks? Is this something possible to test, or if you have already tested it, what kind of data have you seen?

Ryan Jones: Yes, so Google AI Overviews have been the bane of my existence since they launched. I have run countless tests to try to get featured in AI Overviews in some way, whether that is with a blue contextual link in the actual AI Overview result, or a source link at the side.

For the life of me, we have not been able to figure out a concrete pattern. We cannot say that doing specific things in a blog post, for example, will get us featured in AI Overviews. It seems to come down to things like query fan-out, so we are now making sure that we cover different fan-out queries and things like that.

In terms of actually testing this kind of stuff out, it is certainly entirely possible to do. But maybe it is not possible with the current tool sets people are using, purely because we connect with Google Search Console for data, and as annoying as it is, Google does not give us AI Overview data.

It is all lumped in with normal Search Console data, so we cannot really be concrete there. But this is where we see the advantages of vibe coding, for instance, using Claude Code, Replit, or similar tools.

One of the things I have been doing over the past few weeks is updating some of the informational content on our site. I have built what I call an AI Overview watcher. It uses data from the SEO API, runs through it, and scans the SERP for keywords. First, it checks if there is an AI Overview present. Then it checks whether we are featured in that AI Overview. 

So you can absolutely do it, it is just more difficult and may require building your own small toolset until we get more accurate tracking from Google and Bing to see when we are featured in these AI Overviews. It is going to be difficult, but it is almost certainly possible.
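
The watcher Ryan describes could be sketched roughly as follows. His actual implementation is not public, so this assumes a SERP API that returns parsed JSON with a hypothetical `ai_overview` key listing cited source URLs; the schema here is invented for illustration and would need adapting to whichever API you use:

```python
def check_aio_presence(serp: dict, my_domain: str) -> dict:
    """Inspect one SERP payload for AI Overview presence and citation.

    `serp` is assumed to be a parsed JSON response with an optional
    "ai_overview" key holding a "sources" list of cited URLs. This
    schema is hypothetical; adapt it to your SERP API of choice.
    """
    aio = serp.get("ai_overview")
    if aio is None:
        return {"aio_present": False, "cited": False}
    cited = any(my_domain in url for url in aio.get("sources", []))
    return {"aio_present": True, "cited": cited}

def watch_keywords(serps_by_keyword: dict, my_domain: str) -> dict:
    """Run the check across a keyword -> SERP payload map."""
    return {kw: check_aio_presence(serp, my_domain)
            for kw, serp in serps_by_keyword.items()}
```

Run daily and logged per keyword, output like this gives the before/after AI Overview citation record that Search Console currently does not expose.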

You don't need to be a developer to get AI Overview ranking data. 

Advanced Web Ranking automatically scans the SERP for every keyword in your projects to identify exactly where AI Overviews (AIOs) are triggered.

More importantly, AWR tracks if your site is being cited as a source link or featured in the carousel within that AIO, giving you a clear view of your visibility in the new AI-driven search landscape without writing a single line of code.

Skip the manual builds and track your AI Overview presence with AWR. It’s free for 7 days.

Gianluca Fiorelli: Yes, I was asking about this because, for a couple of my clients, I had enough volume of data. AI Overviews have been present for quite a long time now, long enough to be statistically interesting for certain queries.

As you were saying, sometimes Google shows the AI Overview, sometimes it does not. But for this client, when it was shown, they were always present as a source.

This is the first important thing to notice, but sometimes not only as a source but also, as I was saying, as a classic citation with a blue link inside the answer.

So what I was seeing, with my own manual testing, correlating the data, was: "Okay, I know that during this period it was present in the AI Overview, but also present in the classic search results, and it was substantially losing clicks compared to a year before.”

Because when AI Overview was not present, the website with that page for that keyword was ranking more or less in the same position in classic search. And then, when I started noticing that the blue link was present in the AI Overview, I saw that the CTR was still losing some clicks, but much less. 

And sometimes even growing, because if a blue link is present in the AI Overview, people click on it. That is my assumption. If a classic link is present in the AI Overview, people will almost surely click on it, because it’s natural: the link is right there in the conversation. It is not something added; it’s not something you have to look for in a list, deciding which result to click on, this search result or that one, maybe based on brand perception.

So, that is something interesting that can help us understand why Google is testing these kinds of things. And when Google tells us, “No, it is not true that AI Overviews are not sending traffic,” it has a point. They are sending traffic, but obviously, without being able to see the data in Search Console, SEOs are by nature skeptical.

Ryan Jones: Yes.

Gianluca Fiorelli: So, give us the data to trust you.

Ryan Jones: Absolutely.

Internal Linking Still Works - Really Well

Gianluca Fiorelli: Let’s move to another wonderful topic that I really like, because I am really fond of everything around clustering, topical hubs, and so on, which is the topic of internal links. If I remember well, internal linking is one of your beloved topics. And if I remember, you did something in collaboration with Screaming Frog, right?

Ryan Jones: Yes, yes.

Gianluca Fiorelli: And so you were talking about, with data, how internal linking is important. It is true that this is a topic that has resurged in the past couple of years. Still, what do you see people, and SEOs in particular, misunderstanding or doing incorrectly when it comes to internal links?

Ryan Jones: I do not know whether there is a sort of incorrect thing that people are doing when it comes to internal links. I think the biggest mistake you can make in SEO, no matter whether you are working on an e-commerce site, a SaaS site, or any other type of site, is not seeing the importance of internal links.

And this is anecdotal, based on our own testing on our own site, but almost every SEO test that we have run that involved adding internal links, whether to blog posts or feature pages, has led to an increase in clicks.

So, in an environment where 50% of tests succeed and 50% fail, in the sense that they either lose clicks or do not give meaningful data, internal linking is one of those activities that, for the most part, we can say is positive. It brings more traffic, more clicks, and sometimes more revenue.

And I think there are a few reasons for that. First off, obviously going right into the SEO basics, it establishes that PageRank link equity that we all love. So it is the easiest quick win that we can have. When we publish new content, we can say, "Okay, let’s start to internally link this, so we can give it a bit of value from Google’s standpoint, before we even get onto any link acquisition from outside sources."

It also helps us—and this is where the resurgence, like you were saying, has come in the past two or three years—to establish more topical authority. So if there is a certain topic that we love to write about, we can be mindful about the internal links we create and build this kind of topical map that Google likes to see.

Then, more generally, it improves crawlability and gives users a better experience on the site. If someone lands on a blog post and we touch on another topic, it makes sense to internally link to it, regardless of whether it is best practice or not.

If a user is interested in that topic, they may want to click through and read more. That sends positive signals to Google, because it shows they are having a better experience on the site. If they land on a blog post from the SERPs, click a few internal links, and then either leave or take another action, like starting a free trial or getting in touch, it shows Google they are having a good experience on the site.

And we know, from the work Mark Williams-Cook has done and from the Google leaks that we saw, that Google places importance on the experience users are having. So it is about making sure they have a good user experience.

Curious about Mark Williams-Cook’s findings on Google leaks?

In this episode of The Search Session, Mark joins Gianluca to dig into many topics, including hidden ranking signals such as a site quality threshold that can quietly determine whether a page gets a chance to rank at all, and what this reveals about how SEO really works.

Ryan Jones: So, regardless of whether it is best practice or not, if it makes sense to add an internal link somewhere to one of your other blog posts or pages on your site, just do it. Take the time to do it, because it is—I do not want to use the term "best practice," because I said before that I do not necessarily agree with best practices—but just go back to that idea. If it makes sense for the user, then do it, because it is going to give them a more positive user experience than they would have had before.

Gianluca Fiorelli: Yes. Instead of best practice, let’s call it common sense. 

Ryan Jones: Yes, let’s call it common sense. I like that.

Gianluca Fiorelli: And also, the question is: if you were a user, would you click on these kinds of internal links or not?

Do Link Position and Page Authority Matter?

Gianluca Fiorelli: And talking about internal links, there is a very old concept. I am not sure if it is even formally considered anymore, but I think it is so embedded in Google’s algorithm that it still has quite a big importance: the reasonable surfer patent. So, depending on where an internal link is placed, it has more or less value.

Ryan Jones: Yes.

Gianluca Fiorelli: From your testing, have you seen a reflection of the value of internal links depending on whether it is a contextual link, a navigational link, or a sidebar or template link?

For instance, when we talk about products in an e-commerce site, and we have things like “people who saw this product also saw this,” all the formulas we can use to better relate products internally. Did you see some kind of ranking in the value of internal links?

Ryan Jones: I cannot necessarily speak to that purely because we have not really tested it in terms of actual link placements. However, what we have seen, and this could link to the patent or not, is something else.

This goes back to what you were saying about the golden rules or common sense to follow. When we linked—obviously, making sure the link makes sense and things like that—from top pages on our website, so pages that get a lot of traffic and rank for lots of keywords, those types of internal link placements, at least from our end, seem to perform better than links placed on lower-authority blog posts that do not get many clicks.

And we have also seen it when we created new topic clusters. And this is where I might disagree and say that the patent might not be as effective as it was. From our own anecdotal data, when we created new topic clusters and linked to them from the footer, we saw a very big increase in clicks. Now, whether that is because of the footer placement or because we created a strongly interlinked topic cluster, the cluster itself is probably the more realistic explanation rather than just placing links in the footer.

But in terms of actual tests, specifically looking at link placement, we have not done much testing around that. Although it is a good idea, I might have to do it and then write a LinkedIn post about it.

Gianluca Fiorelli: Yes, keep me informed.

Ryan Jones: Absolutely.

How to Build Smarter Topical Hubs

Gianluca Fiorelli: And talking about this, you already introduced the topic, which is something we will probably repeat a lot now. Internal links are the connectors, the axes, so to speak, that shape a content hub, a topical content hub.

So a hub would not be a hub without internal linking. And the classic structure runs from the pillar to the sub-pillars and cluster content, laterally between sub-pillars and cluster pieces, and also the way back, from the clusters to the pillar.

But where I see many topical hubs fail is when people are not able to do correct lateral internal linking, because they are linking to content that is part of the topical hub but not strictly related to the specific sub-niche. A cluster piece could be a landing page or a blog post, for instance.

Do you agree with me that this is usually the classic mistake people make when creating topical hub architectures?

Ryan Jones: I do; I absolutely agree with you on that. And I think this also connects to what we can now do with things like the OpenAI API and vector embeddings.

I have not done too much work around this. We have done a little bit internally and seen positive results. We know that adding internal links where they make semantic sense can produce positive results with the algorithm.

But going back to what I said before: in my experience, no one knows your site or your topic better than you do. If you are a true expert in a particular topic, no one knows it better than you. So if you are looking at your site and reading through one of these sub-pieces, and you think a lateral internal link to a landing page or another content piece makes sense, then it probably does.

You can even verify that with a bit of user testing on your site, using a tool like Crazy Egg. If it makes sense to you, the chances are the algorithm will agree with you in some sense, even if you are not getting into vector embeddings and saying we should place a link here because of that.

It really comes down to common sense and common practice. If you think it makes sense to do it, it probably does.

The downside is that you can go too far in the other direction. You might think, "Okay, we mentioned a keyword here, so we should link to that and that," and that is where people start creating messy content hubs and overlinking between them.

So there is a limit to how far you can go. But I still go back to that idea of topic ownership. If no one knows the topic better than you, then it makes sense to trust that judgment.
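The embeddings idea Ryan mentions can be made concrete. This is a minimal sketch, assuming you already have one embedding vector per page (in practice you would generate these with a model such as those behind the OpenAI API); the URLs, vectors, and the similarity threshold are all hypothetical illustration values, not a recommendation:

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def lateral_link_candidates(pages, source_url, top_n=3, threshold=0.75):
    """Rank the other pages by embedding similarity to the source page.

    `pages` maps URL -> embedding vector. Results above the threshold
    are link *candidates* for a human to review, not rules to apply.
    """
    source_vec = pages[source_url]
    scored = [
        (url, cosine(source_vec, vec))
        for url, vec in pages.items()
        if url != source_url
    ]
    scored.sort(key=lambda t: t[1], reverse=True)
    return [(url, round(s, 3)) for url, s in scored[:top_n] if s >= threshold]

# Toy three-dimensional vectors standing in for real embeddings.
pages = {
    "/hub/running-shoes": [0.9, 0.1, 0.2],
    "/blog/choose-sneakers": [0.85, 0.15, 0.25],
    "/blog/bread-recipes": [0.1, 0.9, 0.1],
}
print(lateral_link_candidates(pages, "/hub/running-shoes"))
# The unrelated page falls below the threshold and is never suggested.
```

This mirrors the point in the conversation: the tool only surfaces candidates, and the topic expert still decides whether each link makes sense for the user.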

Gianluca Fiorelli: And another thing I want to ask you, related to topical hubs, is about another mistake that I see and always argue against, which is considering topical hubs only within the constraints of informational content.

I think, and I do not know if you agree or have seen positive results doing the same thing I do, that topical hubs are not just about owning a topic on the informational side, but about owning the search journey people follow around a topic.

So from very broad informational content to deeper levels. For instance, if we are creating a topical hub about sneakers or running shoes, you create content about how to choose the right sneakers, how to make them more comfortable after the first use, and so on.

Then you go deeper into very specific category pages of an e-commerce site, product pages, and so on. Or even having this kind of middle-of-the-funnel content, like the classic help pages about the product.

So, resolving issues of people who already had your sneakers or had other sneakers of another brand, and want to see if yours have the same problem. Do you think this is the correct way to create a topical map for conquering the so-called topical authority?

Ryan Jones: Yes, absolutely. And I think the way you described it is, in my way of looking at it, quite possibly the best way to do it.

We can have a whole separate discussion about what is happening to the top of the funnel in LLMs and Google’s AI Overview. I still think top-of-the-funnel content matters; it might just matter in a different way. We might not necessarily see clicks from it directly, but we can still see some clicks, and we still do from our own top-of-the-funnel content.

But we also need to make sure that we are the ones informing the discussion so that when LLMs go and do their grounding, we are present in that top-of-the-funnel search journey.

Without going too far into that discussion about LLMs and the whole death of top-of-the-funnel traffic thing, in terms of the actual journey, that is exactly where we want to be. We want to be known as the source that provides all the information someone needs when they are first looking into a topic. We want to make sure that we’re the source of information that people see when researching different tools or different websites to look at, or, like in your sneaker example, the different brands people could use and things like that.

And then, of course, we want to be the place where people go to make that purchase decision and choose to buy from us.

And absolutely, it also helps to have post-purchase content, like help-based content. Because even if we remove the SEO aspect entirely, it just works from a business standpoint for us to be the source of information across the whole spectrum, from initial information right through to post-purchase, making sure that people have a great experience with customer support if they need it.

Because that, at the end of the day, is what drives repeat business, and reduces churn if you are a software business. That is where we get into the whole business aspect of SEO. So even if you remove the SEO benefits from it as the first consideration, it still works from a business perspective.

More on internal links, and topic hubs with Dixon Jones

Explore how semantic context, structured data, and internal linking connect into powerful topic hubs, helping search engines and AI systems understand your site and build real topical authority. 

Check out The Search Session with Dixon Jones

Gianluca Fiorelli: Yes, I agree. If your purpose as an SEO is to become the source of truth, for instance the source of truth that LLMs trust, the classic effect of E-E-A-T, you cannot be that source without also showing expertise and experience in the informational space. And secondly, I think many people neglect informational content for one reason: they chase consensus.

The problem is that consensus, especially in LLMs, does not even get you cited, because it is already in the training data. When something is already in the training data, LLMs do not need to mention a source. Maybe they are using your content but not mentioning you.

And the same is true if we think about classic Google Search. We know that Google explicitly says that in relation to consensus and information gain. If you are not bringing something new or a different perspective, even if you are talking about consensus, you will not stand out.

For instance, in Your Money Your Life topics, Google penalizes websites that go against consensus. But if you do not aim for that information gain value, the informational content will not pay off, and it is expensive and time-consuming to produce. Even if you are using AI, you still need a lot of human involvement to check whether the content is valid.

So people start saying, “Okay, it is not bringing traffic, it is not bringing impressions, LLMs are using it without mentioning us, so let’s not do it.” I think that is a big misunderstanding of the need for informational content.

Defining Success: SEO Testing Metrics That Matter

Gianluca Fiorelli: I have one last question, so let’s return to SEO testing. For a tool called SEOTesting, obviously the first metric we think about is how well we appear in the search results pages: whether, thanks to a test, we appear not only as a standard search snippet but also in search features, how much traffic we gain from the test, and so on. But shouldn’t conversions and micro-conversions be the real metrics to measure?

Ryan Jones: Yes, absolutely. That should always be the number one thing. You can look at it from a sort of downstream point of view. If you have a landing page that converts really well, you can say that if you increase traffic to that page, you will eventually see more revenue.

But I do think it needs to be part of the discussion from the very start. If we run a test on a signup page, for instance, and see a massive increase in clicks, but at the same time form fills drop off a cliff, I do not think we can say that was a successful SEO test.

At the end of the day, yes, we got more traffic, but conversions tanked, right? So we cannot really say that is a success.

In the same way, we have seen examples of SEO tests where metrics have gone down slightly. We might rank for fewer queries and get fewer clicks, but we changed the narrative and started ranking for more relevant queries tied to the buying decision, and we have seen an increase in revenue from that.

So from that standpoint, you could look at it purely through SEO metrics and say, “Okay, we did not see a click increase from this, so it is a negative test.” But hold on, we did see more people actually buy products or sign up for a trial. So from my experience, I can call that a successful test, because at the end of the day, it made the business more money.

So I do think we need to keep that in the narrative and say, “Okay, here are the tests that we have run, here are the SEO results we have seen, but also here are the revenue results we have seen.”

And then from there, going back to what we talked about at the start, we can use that to inform strategy going forward. Because we know which of the activities we have done over the past month or quarter bring us the most revenue, so let’s double down on those, because that is what is most important for us.

At the end of the day, most of us as marketers are results-driven. And the results we should be driving are business outcomes, revenue, profit, and things like that.

So that is where I stand on it. I think we need to focus on revenue-focused reporting, and also change the narrative away from talking about tests as fails or wins, and instead look at them from a revenue point of view.
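Ryan’s point, that a test losing clicks can still be a win if revenue rises, can be sketched as a toy calculation. All the numbers and the flat revenue-per-conversion figure below are hypothetical; a real report would use your own analytics data and attribution model:

```python
def judge_test(before, after, revenue_per_conversion):
    """Judge an SEO test by revenue impact rather than clicks alone.

    `before` and `after` are dicts of 'clicks' and 'conversions'
    counted over comparable periods (illustrative data shapes).
    """
    click_delta = after["clicks"] - before["clicks"]
    revenue_delta = (after["conversions"] - before["conversions"]) * revenue_per_conversion
    verdict = "win" if revenue_delta > 0 else "loss" if revenue_delta < 0 else "neutral"
    return {"click_delta": click_delta, "revenue_delta": revenue_delta, "verdict": verdict}

# Fewer clicks after the test, but more relevant queries converted:
# judged by revenue, this is a win despite the traffic drop.
print(judge_test({"clicks": 1200, "conversions": 24},
                 {"clicks": 1050, "conversions": 31},
                 revenue_per_conversion=80))
```

Reporting both deltas side by side is exactly the “keep revenue in the narrative” framing from the conversation: the click number alone would have labeled this test a failure.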

Gianluca Fiorelli: Okay, yes, I totally agree. And this is also a way to win over your client at the end of the day.

Beyond the Tool: Football, Books, and Life Outside SEO

Gianluca Fiorelli: So let’s stop talking about testing, SEO, and LLMs. Let’s talk a little more about you. I see many books behind you. I think books usually explain a person better than other things. What kind of books do you have behind you?

Ryan Jones: There is a range of them. There are a couple of SEO and marketing books in there. I have the recently released SEO in 2026, which I happen to be featured in, and I am incredibly proud of that.

There are also books on the Pyramid Principle, which help with my public speaking and with explaining complex topics to people who might not be as experienced. And then there are some sales books as well, because sales helps marketing and things like that.

There are some autobiographies in there, and if anyone listening is UK-based, I am a big Derby County fan, so there are a couple of those in there as well. I am a huge football fan.

Gianluca Fiorelli: Okay, okay, okay.

Ryan Jones: There’s a big range of books that I read. 

Gianluca Fiorelli: Yes, but that is interesting, because I am always a bit skeptical of people who only read marketing books. I mean, you do marketing all day, so why spend your free time reading marketing books as well?

I understand that you have to learn and improve, but sometimes reading something totally different, like a historical novel, a sci-fi novel, or even a cookbook, can be a way to free your mind and then absorb work-related things differently.

Another question about you: what do you do when you are not testing?

Ryan Jones: So there is a range of things I do outside of work. As I said, I am a big sports fan, but this does not just cover football or soccer, if you are listening from the US. That would be my number one sport. I play that two to three times a week with friends. I am involved in a charity football team…

Gianluca Fiorelli: In what position do you play?

Ryan Jones: Center back. I am at the heart of the defense. It helps because I am tall, so for corner kicks and things like that, I am one of the key defenders.

I play in a charity football team, so we raise money for different great causes, and we play other charity teams as well.

But yes, I also love going to different sports events: ice hockey and American football. I have been to games when they travel over to Tottenham Stadium in London.

And then just general things. My partner lives in Oxfordshire, so I travel there a lot, and we go on long walks in the countryside with the dog.

Gianluca Fiorelli: Cool. Well, thank you, Ryan. It was a real pleasure having you. I think you shared very interesting things with our audience at The Search Session, and I hope to see you soon in real life and maybe again here on the show.

Ryan Jones: Absolutely! Thank you very much for having me on. I hope to see you in person again soon, and I would be happy to come back anytime.

Gianluca Fiorelli: And thanks to all of you for being so patient and staying with us until the end of the episode. And remember, if you are not subscribed, subscribe and turn on the bell to be notified about new episodes of The Search Session. Thank you and bye-bye.

Gianluca Fiorelli

Podcast Host

With almost 20 years of experience in web marketing, Gianluca Fiorelli is a strategic and international SEO consultant who helps businesses improve their visibility and performance in organic search. Gianluca has collaborated with clients from various industries and regions, such as Glassdoor, Idealista, Rastreator.com, OutSystems, Chess.com, SIXT Ride, Vegetables by Bayer, Visit California, Gamepix, JamesEdition, and many others.

A very active member of the SEO community, Gianluca shares daily insights and best practices on SEO, content, search marketing strategy, and the evolution of search on social channels such as X, Bluesky, and LinkedIn, and through the blog on his website, IloveSEO.net.
