
Webinar! The AI Search Toolkit: Audit Tactics, Tools & Techniques
Published June 19, 2025
July's webinar was an extra-special and super-topical workshop on auditing your brand for AI search visibility.
Lily Grozeva, Partner & Head of SEO at VertoDigital, walked attendees through AI Search auditing. She is an SEO and visibility strategist with 20 years' experience in B2B and tech, and she helps brands transition from being Google-centric to having a multiplatform visibility strategy built for AI-driven search.
In this session, she breaks down what AI Search really means in 2025 and how to audit your brand's visibility across Google's AI Overviews, ChatGPT, and Perplexity. She shares the exact methods, signals, and tools she uses when running AI Search audits for B2B and tech clients, and helps you see for yourself whether SEO still equals visibility.
Watch the recording
You can view Lily's presentation slides here.
Subscribe to the Sitebulb YouTube channel! You can also sign up here to be the first to find out about upcoming webinars.
Webinar transcript
Jojo Furnival:
Hi everyone. We are back with a special workshop webinar today because search has been invaded by generative AI and everybody wants to know what, if anything, they should be changing about their brand visibility auditing workflows. So we've got the fantastic Lily Grozeva, I hope I said that okay, to share her AI search auditing process with us. My name is Jojo. I'm the marketing manager at Sitebulb. Patrick, who is the co-founder and CEO, is in the chat, so feel free to say hi to him, Lily, and each other there. If you aren't familiar with Sitebulb, it's a website auditing and crawling tool. We're the only crawler on the market to offer desktop-based software and a web-based or cloud product. So that means our tool can help every SEO from solo freelancers all the way up to global enterprise brands. You can try our desktop crawler for free, see how it's different for yourself.
Just head to sitebulb.com forward slash download to access the free trial. Sitebulb Cloud is in essence the same tool but accessible via your web browser. It's way more affordable than the other enterprise scale crawlers out there. Sorry, not sorry. If you want to find out more about that or book a demo, head to sitebulb.com/cloud. Both of those links are pinned in the chat. Final bits of housekeeping and we can start. So the webinar is being recorded and the recording will be emailed out tomorrow. As usual, do not worry if you cannot make it for the whole thing and we will have some time at the end of the webinar for your questions.
So please put them in the Q&A tab, which is next to the chat. Don't put them in the chat box itself. It's easier for us to manage and you can also upvote other people's questions in the Q&A box as well. Okay, let me introduce our guest Lily. She is an SEO and visibility strategist with 20 years' experience in B2B and tech. As partner and head of SEO at VertoDigital, she helps brands transition from being Google-centric to having a multi-platform visibility strategy built for AI-driven search, which frankly is what we all need right now. Welcome Lily. Thank you for giving up your time today and welcome all. Okay, I'm going to hand over to Lily now and I will be back later for the Q&A. Have fun. Be good.
Lily Grozeva:
Good afternoon everyone. Can you hear me, guys? Okay, so I guess you hear me. Good afternoon everyone, and thanks so much for taking the time to spend this afternoon with me. I have spent at least the last six-plus months trying to figure out everything that has been going on with search and its inclination towards AI search, and I have managed to put together a couple of tactics, tools, and techniques with the idea of helping out our clients. So all this knowledge I have gathered and put together is something I can share with the community. So here we are in this webinar with our lovely hosts from Sitebulb, and we are going to cover this agenda. I intend to keep it very short and sweet. We'll go very briefly into what changed in search, what you need to actually do an audit for AI search visibility, and what's included in the process I have put together, and as Jojo said, we'll have some time with the audit process and then the Q&A.
So first of all, who am I? As Jojo said, I've been doing SEO for quite some time, mostly for US and EU companies in tech, B2B, and FinTech. I started the so-called AI journey in SEO in 2021. Back then we were trying to help clients produce content, so we got involved with the content AI platforms like Jasper and Copy.ai and Writesonic, and this is where 2023's big eruption of ChatGPT found us. The focus soon had to be changed to the changing search environment: first ChatGPT broke out, I think it was the end of '22, beginning of '23, and then in May '24, I guess, Google tried to catch up with the Google AI Overviews, which, as probably everybody knows, were quite undercooked back then, to say the least. But as of a couple of months ago, both platforms are serving AI-driven results in hundreds of countries and dozens of languages, to a billion to a billion and a half monthly active users.
Shout out to Lily Ray, who's covering Google AI Overviews and how they progress pretty handsomely. So if you don't already, please go and follow her on LinkedIn. But what we are left with right now is what I call a search environment that is multi-platform, answer-centric, and really hard to trust. And with that said, what is even more frustrating is that there are a couple of misunderstandings going on in AI search right now, especially among clients. Their biggest misconception is that AI search is mostly happening in the LLMs. Actually, Google, it turns out, is the place where most of the conversation is happening. Google has said that sending traffic was a necessary evil so far, and obviously the direction they took of not sending so much traffic to publishers and the rest of the web is a direction they're going to keep for the time to come. So that leaves us in a situation where most people are confused; they hear different signals about how urgent AI search visibility is.
They're really frustrated, wondering whether this is yet another SEO hype or whether something is really up. So the easiest thing I found in helping clients and businesses in general find their way around is just to do an audit of their current visibility and, based on the results coming from an audit like that, try to understand how urgent the transition to AI search is, what it is that they need to do, and how to approach that. So, about audit prerequisites: what do you need, and what do I actually start with before I run any AI visibility audit? A couple of things. As with any SEO project, you ask for input from your client about main personas and main use cases. A web crawl with all the public text that lives on the website is a very good idea. If you are only auditing your own website and not competitor websites, you can take advantage of getting the performance data as well.
With GA4 and the Search Console integrations, that is. You also want a very good explanation of what the client offering is, products and services, and who the people on their leadership team are and what they do. These are the basis of your branded prompts. Usually clients also have a list of targeted prompts, both branded and non-branded, and if they don't, a lot of the tools out there, the so-called AI analytics tool stack that so many companies turned out to be building in the last year or so, also provide that type of information if clients cannot come up with branded and non-branded prompts.
And obviously you can also generate synthetic prompts using any of the LLMs, just saying: these are the core topics my client is after, these are the personas, the main use cases, and the website, and you can come up with a couple of synthetic branded and non-branded prompts. Three to five competitors is also something that you should have on the list, so that you build a full client input file that you are going to use in different steps of the audit, which I'm going to show you. I also put together an example file that you'll be able to download once you have the deck.
So when it comes to the tool stack, for this particular audit I will be using Google Search Console or Semrush. In this case, I will be using a brand that I literally hadn't heard of before putting this deck together, so I will be using Semrush, but if you have the first-party data for the business, you can also get this data, and probably more reliable data, from Search Console. And I will be using Sitebulb for the custom crawl extraction of the public text that we are going to need for the different aspects of the audit.
Again, when it comes to AI analytics tooling, I personally am using Rankscale, Peec, and Knowatoa for the particular use cases you see on the right: the readiness score, content-to-prompt mapping, citations, the ecosystem overview, share of voice, and all the more AI-specific angles of the content and the audit. But there are many other options on the market, and I'm sure, given the use cases, you will find the right vendor for you and the audit process. But before we start, obviously there has been a lot of discussion in the industry in the last year, more than usual, and one of the debates is: are GEO optimization and GEO auditing similar, or the same as, the SEO auditing that we know painfully well?
There is a debate about it; everybody has an opinion. I share the opinion of Will Scott, who put it very nicely, saying that GEO is like SEO if people have been doing SEO the right way for the past 20 years. So this is brilliantly said, but besides that, one cannot escape the feeling that a lot of the tactics and directions really look like what we know from SEO and the skill sets we built back then: PR is coming back, citations [inaudible 00:13:08] are coming back with ChatGPT search. Content marketing is back, but this time addressing AI training data, and we are talking about chunks, not pages, now. So it's a lot like that, but it kind of emphasises quality more than just doing more in terms of volume, to put it that way. Okay, so the process that I follow has seven steps.
I've tried to break them down here to give you an overview of what I try to cover. This is by no means exhaustive. The main idea in following this process is more about covering the bases that make sense, so that we have enough data to work with and that could mean real impact for the client, rather than deep diving into theories and overdoing, not to say overcomplicating, aspects that we don't really know enough about. So this process is a little bit simplified, but I think it covers enough to build an audit with recommendations that really make a difference. The first step is defining the core topics based on the prompts that have been delivered by the client or built with any of the tooling I already mentioned. Then we try to understand how the content on the website actually covers those desired core topics.
Then we try to understand how visible they currently are across the different LLMs and AI Overviews. Then we build the entity strength and NLP frequency analysis; I have it in the form of a table, but you can have it in any other way that makes sense. This entity strength and NLP frequency helps us understand what more we need to build in terms of content if we are about to position ourselves for these particular prompts. To be honest, the technical side of AI search audits is something that at this point I don't see as a really crucial step in the recommendations coming out of this audit in general. It's simplified a lot from what we know in traditional SEO, but I will show you what I've included in the technical accessibility aspect. Citation presence and training data is a very interesting step for me personally, as it includes a lot of interesting insights, and we will finalise the process with how we benchmark against competitors.
And a couple of suggestions on what to start monitoring to make sure you are on top of your visibility in AI. Okay. So as I said, I picked one company that I had never heard of before. It turns out we have very solid NDAs with most of our clients, so I cannot work on them in particular. But here is one very, very suitable example. This is a US tech brand with a very small website. I specifically wanted it to be small for educational purposes; I didn't want to crawl for hours and hours on end. It's not very popular, so we don't have these spikes in visibility based on the authority of the brand alone. It's a B2B SaaS model, and from what I get from the key topics, this is something very similar to Otter.ai or Fathom or a tool like that. So this would be the sample project we are going to work with.
Okay, so the first step is to put together the map of targeted prompts to the core topics. What we need to do is understand: do we have enough content to match the targeted prompts from the client input file with the website pages? So this is the website. Something's up with the... And we made a custom content extraction using Sitebulb in this case, setting up the web crawl with a custom extraction of all the body text content we want to crawl and extract, and when we are done, we do an export to a CSV file. Then we use the client input file and this crawling data. I personally used an LLM... no, sorry, I didn't use an LLM at this point, but you can use one if it's more complex topic-wise.
In this case, what I did to build the core topic map was to use the Google advanced search operators to actually understand what portions of the website are ranking for a particular prompt, or you can use the core topics if you want to drill down on specific pages. The idea is that we are building a map that shows us: these are the core topics that the client actually wants to cover, this is a breakdown of some of the representative prompts per topic, and these are the actual URLs you can work with to present the content to the LLMs, so that it can be selected and used as an answer, at least summarised if not used directly.
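The mapping step described here can also be roughed out programmatically. The sketch below is an illustration rather than part of the workflow shown in the webinar: it matches targeted prompts against a Sitebulb-style CSV export by simple keyword overlap. The column names `URL` and `Body Text` are assumptions, so adjust them to whatever headers your export actually uses.

```python
import csv
import io

def map_prompts_to_urls(crawl_csv_text, prompts, min_overlap=2):
    """Map each targeted prompt to crawled pages whose body text shares
    at least `min_overlap` keywords with the prompt."""
    reader = csv.DictReader(io.StringIO(crawl_csv_text))
    # Column names "URL" and "Body Text" are assumed; match your export's headers.
    pages = [(row["URL"], set(row["Body Text"].lower().split())) for row in reader]
    mapping = {}
    for prompt in prompts:
        terms = set(prompt.lower().split())
        mapping[prompt] = [url for url, words in pages
                           if len(terms & words) >= min_overlap]
    return mapping

# Tiny illustrative export: two pages, one targeted prompt
sample_csv = (
    "URL,Body Text\n"
    "https://example.com/meetings,ai meeting notes and transcription tool\n"
    "https://example.com/pricing,pricing plans overview\n"
)
result = map_prompts_to_urls(sample_csv, ["best ai meeting transcription"])
```

A real version would use stemming or an LLM to judge topical relevance rather than raw word overlap, but even this crude pass surfaces which prompts have no candidate page at all.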
So when we have the URLs, which is the main point of the previous step, when we have the pages that we want to work with, we need to run a page-level analysis for each of them. Usually, if you take a look at the table, because the prompts are usually very tied together, most of the pages start to overlap: one and the same page starts to appear for a few different prompts that are similar but not the same. So when you have the final list of pages that you want to work with, you can turn to a tool similar to Rankscale, or again you can use an LLM to help out with it. A tool like Rankscale can actually help you understand how the content structure and the content chunks, the so-called citations of the content on the page, could help that page be used as an AI result.
So what do we want to pay attention to in a report like that? The AI search readiness score helps us prioritise the pages; obviously the ones with a lower readiness score would need more work. So the overall content score is one thing we are paying attention to. The second thing is the content quality and relevance. Rankscale does a very good job of highlighting what LLMs consider important in that situation. Something I forgot to mention is that when setting up tools like Peec, Rankscale, or Knowatoa, you already set them up against the prompts you are optimising for and the competitors that you have in the client list. So when they say that this page is relevant to user intent, there is actually a specific prompt they have in mind when it comes to this recommendation. So one thing we pay attention to is the content quality and relevance. The second important thing when you look at a report like that on a page level would be the authority and trustworthiness.
What I love about this tool is that it gives very specific on-page trustworthiness and authority components for the page, like legal compliance, or that you're missing testimonials, or that you're missing company information. This is all actionable: company information, for example, could be put in the footer to improve the page, or testimonials could be given a place in the design. These are very specific page-level recommendations that are quite useful. Not immediately, you still need the advice of the design team, but in general it's a good idea to take a close look at these recommendations. So you need to run these reports on a page level. Also, what I like about this particular tool and this particular type of tooling, again the ones I mentioned before, is that the report is applicable both to the LLM chatbots and to Google AI Overview positioning.
Here are the recommendations. One of the options is to analyse each page with Rankscale. The second option is to actually go and build a custom GPT, or a Gem in Gemini, or a Claude project, and run a prompt that you made and can refine and work with, asking an LLM for advice on how well optimised a certain page is. It's fascinating, the results you can get from there. Again, I left an example of a prompt in a file that you can use to run this on your own.
Reporting on the visibility is probably the most fun people have with these tools. Again, tooling like Rankscale, Peec, and Knowatoa reports visibility mostly on synthetic prompts. I think everybody is already aware of that. Ahrefs actually mentioned yesterday that they now have indexes with actual prompt data; I would follow their posts and their updates on that particular information. But as of now, all the tools similar to Rankscale, Peec, etc. are, as far as I know, using synthetic prompts unless you give them specific prompts to work with. This all means that this type of tooling is not useful for researching what to optimise for. If your client or your boss hasn't already given you the prompts you want to appear for, the tooling won't tell you what you need to optimise, because the prompt data it has is actually synthetic.
Again, same here for Peec. This is the type of reporting you can expect from that type of tooling. But what's interesting here, on the AI Overview side, is that you can actually use Semrush to get an idea of what to optimise for. The AI analytics tooling won't tell you, or won't enable you to do any research, but Semrush, I think it was the main report, then you go to the organic reporting and then the sub-feature trend report, can give you a very good idea of what to optimise for in terms of AI Overviews. If you take a look at the keywords where you rank but are not linked in the main AI Overview, you'll see all the low-hanging fruit, so to say. Obviously you'll need to take a closer look at these almost 3,000 keywords and pick the ones that you really want to work with, especially mapped to the pages we already saw in the core topic map.
But if the AI Overviews are where you have potential, because you appear on the page but you are not in the AI Overview as a source, then you can work on the second bucket, the lower bucket here, and optimise to appear in the AI Overview. Semrush is actually the research tool here that can help out with planning new content.
So step four: the entity strength and NLP frequency. Here the task is to build an NLP and entity analysis with ChatGPT and Gemini. The main goal is to understand whether the targeted topics are actually reflected in the existing web content. So how do we do that? Building the prompt is pretty easy, I think. Again, I left an example prompt you can use. The LLM itself is an NLP and entity tool, so to say. So I would say just use a prompt to ask, based on the inputs coming from the website content and the example table; I literally say to the LLM: use this table as an example to build me an NLP and entity strength analysis based on the input data. And this is the result that you can get, which is quite fascinating.
You already have the entities; this is what the LLM does. It goes through the text that you crawled with, let's say, Sitebulb. It identifies what the entities are, identifies how frequently they appear and what their relevance is to the core topics that you already identified in the client input file, and then it gives you content opportunity or gap suggestions. Many people, I guess, will approach this directly and try to immediately start thinking about producing content about that. I would advise going through it with people who understand the offering well, people who understand the website and write the content as well, and then deciding on what exactly we need to add to the website, and in what format, before saying that this is our new content plan.
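As a deterministic complement to asking an LLM, the frequency half of this entity-strength table can be approximated in a few lines. This is a rough sketch under stated assumptions, not the method used in the webinar: it counts whole-word occurrences of each target entity across the crawled page text, and entities that come back at zero are your most obvious content gaps.

```python
import re

def entity_frequency(page_texts, entities):
    """Count whole-word occurrences of each target entity (word or phrase)
    across the crawled page texts, as a rough proxy for entity strength."""
    blob = " ".join(page_texts).lower()
    counts = {}
    for entity in entities:
        # escape the phrase and anchor on word boundaries
        pattern = r"\b" + re.escape(entity.lower()) + r"\b"
        counts[entity] = len(re.findall(pattern, blob))
    return counts

# Illustrative crawl text and target entities (placeholders, not real client data)
texts = ["Meeting notes and meeting summaries in one place", "An AI meeting assistant"]
counts = entity_frequency(texts, ["meeting", "AI", "call recording"])
gaps = [e for e, c in counts.items() if c == 0]  # entities with no coverage at all
```

An LLM pass adds the relevance judgment and gap suggestions on top; this only gives you the raw frequency column of the table.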
So, about step five, the technical readiness. Again, as I mentioned, from my perspective, unless you are a huge publisher website that wants to approach LLMs and Google to make direct deals, Reddit style, I don't see a reason why a small, medium, or even a little bigger business would decide to keep its content hidden from LLMs or Google by disallowing AI bots from crawling. But anyway, as part of any audit, it's a good idea just to check if somebody before you had other ideas.
One tool that can help out with checking on AI bot accessibility is Knowatoa.com, very, very sweet Canadian guys who built this platform. What it does is test whether the content of the website is accessible to certain bots. Here are the 24, probably 25, bots that they test against. Most of them are the AI bots; there are some others in the picture as well, like the [inaudible 00:31:15] portal, platforms like that. But nevertheless, this is what you can get. It's good to do it just to make sure that, again, somebody before you hasn't decided to limit accessibility for AI bots for some reason.
Something else that came up as a novelty is the so-called llms.txt file. In this particular case, for Avoma.com, they don't have that file right there. I have been following a discussion on the subject: do we need that file? Aren't AI bots already smart enough to find their way around without it? Et cetera, et cetera. I would say that right now there are many things that are a real black box when it comes to how AI bots access, consume, and interpret information. If it's something that is not hurting your visibility and takes like five minutes to do, something really insignificant in terms of resources, I would say just go and do it rather than debate whether we need it or not. We don't have enough field data right now to decide if it's really impactful or not.
And if you decide to go down that road, Writesonic has a very, very useful llms.txt generator you can use. It looks something like this. And the same goes for Schema.org: same conversation, same debate. Do we need to do it or not? Aren't AI bots smart enough to recognise the different aspects of the content without using schema? First of all, it's a good idea to audit whether the Schema is properly set up in the first place. Then, in some cases, context can become very complex, and defining entities using Schema can help quite a lot. So again, if it's not too much trouble, I would audit whether it's set up properly and also implement it.
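For reference, a minimal Organization snippet of the kind a Schema.org audit would look for might be shaped like this. Every value below is a placeholder, not data about any real company; on a live page the JSON would sit inside a `<script type="application/ld+json">` tag.

```python
import json

# A minimal Organization JSON-LD object; every value here is a placeholder
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example SaaS Inc.",
    "url": "https://www.example.com",
    "logo": "https://www.example.com/logo.png",
    "sameAs": [
        "https://www.linkedin.com/company/example-saas",
        "https://x.com/example_saas",
    ],
}

# Serialize as it would be embedded in the page's <head>
markup = json.dumps(organization, indent=2)
```

The `sameAs` links are what tie the entity to its profiles elsewhere on the web, which is exactly the kind of disambiguation signal the entity-strength step cares about.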
So, step six, we are approaching the end of the process: citation presence and training data inclusion. For me personally, it's a fascinating step, because I see a lot of insight and data about where the users are actually spending a lot of time and where the opportunities are to really optimise your presence in LLMs and AI Overviews, besides the content chunk recommendations provided by Rankscale right here. We already know that the actual training sets are not public, and we don't have an actual idea of what that data looks like, but we can form a pretty decent idea of the whole picture by looking at the citations.
Again, I will emphasise "decent". One of the tools from the AI analytics tool stack that does a really good job when it comes to showing the sources and the citations is Peec, here. What is really valuable for me personally is that I not only take a look at how my client is doing, but it's always in comparison with the visibility of their competitors, and of all the other websites whose content is also used as citations for these prompts.
Another way for you to check on the sources that are used for building an answer for the prompts you are targeting is, for example, Perplexity. They provide a breakdown of the sources used to build up an answer, so I would definitely use that. Gemini can also show a breakdown of its thinking process and what sources are included while it is trying to build up the answer. And finally, number seven is the competitor benchmarking. Here the main idea is to set up competitor visibility reports that are going to help you follow the visibility change over time, so, continuous monitoring using AI analytics tools. First of all, all AI tooling supports analysis against competitors. It shows you the mentions you are getting, as well as the ones competitors are gaining for certain prompts.
And the main benchmarking KPIs that I would set up in continuous monitoring using AI analytics would be the visibility score, which is an overall score, and Semrush has something similar in their projects, and also the mentions. Mentions are probably becoming the most important currency, so to say, when we measure the success of that type of campaign. Another very interesting conversation and debate to follow would be, with the fall of clicks and the changing components of the ranking factors, so to say: what would be an interesting measure of success when we are doing campaigns like that? So measurement and reporting, when it comes to AI search, is also a very interesting conversation to follow.
Here is another screenshot; this is another interesting report you can get from that type of tool, like Peec. It's not just a visibility chart of how you are doing and how your competitors are doing; it also includes other players in the field that you might have missed, or that might be missing from the traditional SEO rankings altogether for whatever reason. Here is another one. Pretty much everybody in the AI analytics tool stack is also reporting on visibility and can provide continuous monitoring, which I think is the most valuable information coming out of that tooling. Oops. So, a couple of things that I learned now that we went through the whole process with all the seven steps: in reality, unless Ahrefs are onto something, no one knows what people are actually typing or saying to an LLM yet, nor what data is included in the training model.
So be ready to do a lot of manual checkups. And there are also additional circumstances that complicate the whole process, like extended memory, the overpersonalization of the prompts people are using on that type of platform, as well as the emergence of voice in prompting. People are talking to the LLMs much more than they were to search engines before. So this will change the whole input as we know it, and that is something we need to monitor, analyse, and try to optimise for. Also, when there is a lot of novelty in the field, as there is right now, I would be more open to using techniques like adding an llms.txt file or doing a Schema.org implementation, rather than debating whether it's really the right thing to do. People usually, in that type of situation, try to argue and debate and waste time on things that, if they just simply did them, as long as they're not hurting the website, would show over time whether they were impactful or not.
And also something else that somebody on LinkedIn suggested a couple of days ago: when doing that type of audit, it's equally informative to do it for yourself, but also to do pretty much the same thing for your competitors. You can get a lot of ideas about how they structure their content. Obviously, where they show up in citations is hugely informative, and you can take advantage of that. And definitely use an AI analytics tool that follows your change in AI visibility in comparison to theirs as a benchmark; that is very informative as well. And that's it.
Jojo Furnival:
Am I on the stage? I hope people can hear me. That was awesome. That was really, really, really helpful. Thank you so much, Lily. It doesn't look as though we've got any attendee questions this time. I think everyone is busy digesting all of that juicy AI search auditing information so we can finish early. Woohoo, just digesting. Yeah, that's what we're saying in the chat. I know you've been busy concentrating on your slides so you won't have seen. There's been lots of applause emojis, lots of heart emojis and lots of stuff in the chat. Oh, my word. We've just got a question in. Given the vast array of tools available, what are your top recommendations?
Lily Grozeva:
I literally worked on a slide about this. I think it is definitely double digits, maybe 15 companies, since the beginning of the year. They all have different approaches. What I have noticed is that most of them are actually software developers who became people building software for marketers. Only a couple of them, I'm not mentioning names, but only a couple of them were actually marketing people who understood what we really need when we are using a tool like that. I would say that it's currently very useful that there are so many tools to choose from, because you can build a process for your custom set of clients where you use different tools from different angles. For example, as I've shown on one of the slides, I'm using Rankscale to give me the overall score and also for those very insightful recommendations of on-page optimizations of the content chunks.
I'm using Peec because of the citations and the breakdown of the visibility of the whole industry when it comes to the citations. I'm using Knowatoa on the technical side, and they also have a very good breakdown of how the different synthetic prompts fit in a marketing funnel. So they're all good at different things. So if you have a process of your own, or decide to use mine, just take a look at the use cases, and you could easily find at least a couple of tool suggestions for each use case and angle you want to cover in your audit.
Jojo Furnival:
Awesome. Thank you so much, Lily. Please join me, everybody, in saying a huge thank you to Lily for generously giving up her time and so much valuable expertise and workflow, process, tool stack, everything. That was fantastic. Yep. I'll be emailing out the recording tomorrow to everyone who registered. Don't worry if you missed the start. Next month's webinar is a brand new module in our free JavaScript SEO training course. So if you've done it already, Tori and Sam are going to be back to teach you all about the impact of JavaScript on brand visibility in the era of AI search and what to do about it. So the links for that are in the chat and I'm just checking... Oh my goodness. We actually do have another question and seeing as we've got time, we can potentially answer it, Lily, if you're game. How educated... Yeah, go on. Okay, so how educated, let me put it on the stage, are your clients about all of this? I feel like SEOs need to be the educator on the topic before doing the work.
Lily Grozeva:
Oh, this is a very good question. Actually, there are people at all ends of the spectrum, and I'm emphasising "ends". There are clients who are very... most of them are tech companies, so they're inclined to be more technical. They do understand SEO somewhat, but unfortunately they're the ones reading the AI search hype news, and they are all game, just fire and flames, let's go, let's do it, et cetera. And they need to be tamed a little bit: you don't really need all that, let's start with an audit; let's see if you actually are in as much of a hurry as you think you are. So this is one end of the spectrum. They know about SEO. Most of them have internal teams; most of them have digital marketing teams. And there are people who have decided to play the ostrich: for now, nothing is happening.
SEO is SEO, let's talk Google, I don't want to be bothered. And to be honest, in the B2B SaaS niche I'm working with, these are smaller websites, smaller business websites. The huge traffic swings that scare the industry, coming from the publisher websites that most SEOs report on, are not so huge in B2B; we don't see them that much. And this is a very good excuse for that type of client to say, "Okay, let's focus on Google, just leave this AI stuff alone." Even an AI search audit doesn't sound timely to them. They think there is enough time, and they don't even want to know their situation right now, but I think knowing is the healthy thing to do. It's up to you to decide whether you then move forward and execute on the recommendations. But at least you need to understand if you're there for your core topics, if your competitors are, and if there is an opportunity or something you need to work on in your content to actually help yourself, before you execute that six-month content calendar of blog posts through the end of 2025.
Jojo Furnival:
Yeah, okay. I think there are no more sneaky questions.
Lily Grozeva:
They can always reach out in all the other ways.
Jojo Furnival:
Your final slide, is that a QR code to your LinkedIn?
Lily Grozeva:
Oh yeah. Hold on, I forgot about it. Yeah, you can obviously reach out to me on LinkedIn and ask any questions about the templates I shared. There are links in some of the steps to templates and prompts I've built that you can use. Any questions, on the templates or the data in general, please let them come in.
Jojo Furnival:
We do. This is hilarious. I'm loving how I've already done my outro and there are more questions coming in. So do you want one more? Let's say it's one more and then no more questions. So Chris says, Chris Lever, "LLM traffic is low..." Let me put it on the stage. "LLM traffic is low and I hear recommendations for restructuring key organic traffic driving pages. However, I don't hear anyone talking about the risk to organic traffic while that content is reevaluated. Do you agree there is a risk?"
Lily Grozeva:
This is a very good question. LLM traffic is really low, and from what I read, the expectation is that it'll be three years until ChatGPT and LLM traffic, whatever GPT is in three years, will meet Google in terms of traffic levels, if that happens at all; we don't know, obviously. But the thing is that most of the AI search action is not happening on the LLM side, and the whole process I showed, and the tooling you can use to break down how successful you can be, is as impactful for AI Overviews as it is for LLMs, and AI Overviews are here already. I think they appear in 25 or 30% of all searches, of all search demand share, which is a huge number. So this is not about LLMs. This was an audit for AI search, not for LLMs. LLMs are still very small, I agree. Very small.
Jojo Furnival:
Okay. All right, we are going to wrap up now. That was fantastic. Thanks guys. Do connect with Lily on LinkedIn, and if more of these sneaky questions come to you late in the day, in the middle of the night... no, don't send them then. But yeah. Okay. Thank you very much, everybody, and we will see you on the next one.
Lily Grozeva:
Thanks, everyone.

Jojo is Marketing Manager at Sitebulb. She has 15 years' experience in content and SEO, with 10 of those agency-side. Jojo works closely with the SEO community, collaborating on webinars, articles, and training content that helps to upskill SEOs.
When Jojo isn’t wrestling with content, you can find her trudging through fields with her King Charles Cavalier.