Screaming Frog Tutorial: Run a Technical SEO Audit
Screaming Frog SEO Spider is a desktop-based website crawler that has become the go-to tool for technical SEO audits. Unlike cloud-based platforms, it runs locally on your computer, giving you complete control over crawl configuration. The free version has no subscription cost, and the paid licence lets you analyse sites of virtually any size.
For Singapore businesses, technical SEO is often the difference between a site that ranks well and one that languishes despite good content. Issues like broken links, redirect chains, duplicate content, and missing meta tags silently undermine your search performance. Screaming Frog identifies these problems quickly and systematically, whether you are auditing a five-page corporate site or a fifty-thousand-page e-commerce catalogue.
This Screaming Frog tutorial covers everything you need to perform a thorough technical SEO audit: setting up your crawl, finding broken links, tracing redirect chains, detecting duplicate and thin content, identifying missing meta tags, and exporting actionable data for your team. If you are new to technical SEO or want a professional team to handle the audit process, our SEO services include comprehensive technical audits using tools like Screaming Frog.
Setting Up Your First Crawl
Before you begin crawling, you need to download and install Screaming Frog SEO Spider. The tool is available for Windows, macOS, and Linux. The free version crawls up to 500 URLs, which is sufficient for small Singapore business websites. The paid licence removes this limit and unlocks advanced features like custom extraction, Google Analytics integration, and scheduled crawling.
To configure and launch your first crawl:
- Open Screaming Frog and enter your website URL in the URL bar at the top of the application.
- Before clicking “Start,” configure your crawl settings under Configuration > Spider. Key settings to review include:
- Crawl Mode: Keep the default “Spider” mode, which follows links from your starting URL. Use “List” mode if you want to crawl a specific set of URLs.
- Limit Crawl Depth: Leave this unchecked for a full audit, or set a limit if you only want to crawl pages within a certain number of clicks from the homepage.
- Check External Links: Enable this to find broken outbound links (links to other websites that return errors).
- Crawl All Subdomains: Enable this if your site uses subdomains that should be included in the audit.
- Under Configuration > Robots.txt, decide whether to obey or ignore your robots.txt file. For a comprehensive audit, you may want to ignore it to check pages that might be accidentally blocked.
- Set the crawl speed under Configuration > Speed. Use a maximum of 5 URLs per second for your own site to avoid overloading your server. For competitor analysis, use a lower speed to be respectful of their resources.
- Click “Start” to begin the crawl.
The crawl progress bar shows URLs discovered, crawled, and remaining. For a typical Singapore SME website with 100 to 500 pages, the crawl should complete within a few minutes. Larger sites can take longer — enterprise-level crawls of tens of thousands of pages may take an hour or more depending on server response times and your internet connection.
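As a rough sanity check before you start, you can estimate a lower bound on crawl time from page count and your configured crawl rate. This is a sketch only; real crawls vary with server response times and rendering settings:

```python
def estimate_crawl_minutes(url_count: int, urls_per_second: float = 5.0) -> float:
    """Rough lower-bound crawl time, assuming the crawler sustains the configured rate."""
    if urls_per_second <= 0:
        raise ValueError("urls_per_second must be positive")
    return url_count / urls_per_second / 60

# A 500-URL site at 5 URLs/second: about 1.7 minutes at best.
print(round(estimate_crawl_minutes(500), 1))
```

At the same rate, a 10,000-page site works out to roughly half an hour, which matches the ballpark figures above.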
Once the crawl completes, you are ready to analyse the data across the tabs and reports described in the following sections.
Finding and Fixing Broken Links
Broken links harm user experience and waste your crawl budget, which is the number of pages search engine bots will crawl on your site within a given period. They also prevent link equity from flowing through your site’s internal link structure. Screaming Frog makes it straightforward to find every broken link on your site.
To identify broken links:
- Click the “Response Codes” tab in the main crawl window.
- Use the filter dropdown at the top and select “Client Error (4xx)” to see all pages returning 404 and other 4xx status codes.
- For each broken URL, click on it and then check the “Inlinks” tab at the bottom of the screen. This shows you exactly which pages on your site link to the broken URL.
You will also want to check for broken external links — links from your site pointing to other websites that no longer exist. To find these, go to the “Response Codes” tab, filter by “Client Error (4xx),” and look for URLs on external domains.
Common causes of broken links on Singapore business sites include pages removed during a website redesign, product or service pages that have been discontinued, blog posts with outdated outbound links to resources that no longer exist, and URLs changed without implementing proper redirects.
For each broken link, you have several options: create a 301 redirect from the broken URL to a relevant existing page, update the internal links to point to the correct destination, or remove the link entirely if no suitable replacement exists. If broken links are the result of a recent site migration, our web design team can help implement a comprehensive redirect strategy.
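Once you have exported the inlinks data, a short script can group every broken destination with the pages that link to it, giving developers a ready-made fix list. This sketch assumes columns named Source, Destination, and Status Code; adjust the names to match your actual export, and note the URLs are illustrative:

```python
import csv
import io
from collections import defaultdict

def broken_link_sources(csv_text: str) -> dict[str, list[str]]:
    """Group 4xx destinations by the pages that link to them.

    Assumes an export with 'Source', 'Destination' and 'Status Code'
    columns; adjust names to match your actual export."""
    broken = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row["Status Code"].startswith("4"):
            broken[row["Destination"]].append(row["Source"])
    return dict(broken)

sample = """Source,Destination,Status Code
https://example.sg/,https://example.sg/old-page,404
https://example.sg/about,https://example.sg/old-page,404
https://example.sg/,https://example.sg/services,200
"""
print(broken_link_sources(sample))
```

Grouping by destination means one redirect or link update can clear several flagged rows at once.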
Identifying Redirect Chains and Loops
A redirect chain occurs when a URL redirects to another URL, which then redirects to yet another URL before reaching the final destination. Each hop in the chain adds latency, wastes crawl budget, and dilutes the link equity passed through the redirect. A redirect loop is even worse — it occurs when redirects create a circular path that never reaches a final destination.
To find redirect chains and loops in Screaming Frog:
- Go to the “Response Codes” tab and filter by “Redirection (3xx).”
- Check where each redirecting URL points; if the target is itself a redirect, you have a chain.
- For the full picture, use Reports > Redirects > Redirect Chains from the top menu. This dedicated report lists each chain hop by hop, from the first URL to the final destination, and also surfaces loops.
Redirect chains commonly build up over time through successive website redesigns. For example, a Singapore company might have redesigned their site in 2020, redirecting /services.html to /our-services/. Then in 2023, they restructured again and redirected /our-services/ to /services/digital-marketing/. Now the original URL passes through two redirects before reaching the destination. The fix is to update the first redirect to point directly to the final URL, eliminating the intermediate hop.
Redirect loops are typically the result of misconfiguration — for example, URL A redirects to URL B, and URL B redirects back to URL A. These render the pages completely inaccessible and must be fixed immediately. Check for common causes such as conflicting redirects in your .htaccess file, WordPress plugin conflicts, or CDN-level redirect rules that clash with server-level rules.
Best practice is to ensure every redirect chain is reduced to a single hop (one redirect from the old URL to the final destination). After fixing redirect chains, re-crawl your site to verify the issues are resolved.
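The single-hop rule can also be automated: given a map of redirects (for example, parsed from your .htaccess or the Redirect Chains export), collapse every chain to its final destination and flag loops. The URLs below are illustrative:

```python
def flatten_redirects(redirects: dict[str, str]) -> dict[str, str]:
    """Resolve each source URL to its final destination, collapsing chains.

    Raises ValueError if the redirects form a loop."""
    final = {}
    for start in redirects:
        seen, url = {start}, redirects[start]
        while url in redirects:          # keep hopping while the target also redirects
            if url in seen:
                raise ValueError(f"Redirect loop involving {url}")
            seen.add(url)
            url = redirects[url]
        final[start] = url
    return final

# The two-hop example from the text, collapsed to single hops:
chain = {"/services.html": "/our-services/",
         "/our-services/": "/services/digital-marketing/"}
print(flatten_redirects(chain))
```

Both old URLs now point straight at /services/digital-marketing/, which is exactly the rewrite you would apply to the server rules.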
Detecting Duplicate Content Issues
Duplicate content confuses search engines about which version of a page to index and rank, potentially splitting link equity across multiple URLs and leading to lower rankings for all versions. Screaming Frog offers several ways to identify duplicate content on your site.
To find duplicate content:
- Go to the “Content” tab and use the filter dropdown. Key filters include:
- Exact Duplicates: Pages with identical content. These are flagged when two or more URLs serve the same HTML body content.
- Near Duplicates: Pages with highly similar content (available in the paid version). These are common on sites with product variations or location-specific pages that share most of the same text.
- Check the “URL” tab and filter by “Duplicate” to find URLs that are accessible through multiple paths — for example, with and without trailing slashes, or with and without “www.”
- Review the “Canonicals” column in the main crawl view. Pages with missing or incorrect canonical tags are particularly vulnerable to duplicate content issues.
Common duplicate content scenarios for Singapore businesses include HTTP and HTTPS versions of pages both being accessible, www and non-www versions resolving to different URLs, product pages accessible through multiple category paths (common on e-commerce sites), paginated content without proper canonical handling (note that Google no longer uses rel="next"/"prev" as an indexing signal), and URL parameters creating additional indexable versions of the same page.
Solutions depend on the type of duplication. Implement canonical tags to tell search engines which version is the definitive one. Set up 301 redirects from non-preferred URL versions to the canonical version. Use meta robots noindex tags to keep duplicate pages out of the index (robots.txt blocks crawling, not indexing, so it is not a reliable fix on its own). For parameter-based duplicates, rely on canonical tags, since Google Search Console's URL parameter tool has been retired. Having a solid content marketing strategy also helps ensure each page on your site offers unique value.
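For triaging parameter, www, and trailing-slash duplicates in a crawl export, a simple URL normaliser groups the variants together so you can see how many versions of each page exist. This is an audit heuristic only, not a substitute for canonical tags, and the URLs are illustrative:

```python
from urllib.parse import urlsplit

def normalise_url(url: str) -> str:
    """Canonical-ish form: lowercase scheme and host, drop 'www.', the
    trailing slash and the query string, so variant URLs group together.

    A triage heuristic only; real canonicalisation belongs in your tags."""
    parts = urlsplit(url)
    host = parts.netloc.lower().removeprefix("www.")
    path = parts.path.rstrip("/") or "/"
    return f"{parts.scheme.lower()}://{host}{path}"

variants = ["https://www.example.sg/shoes/",
            "https://example.sg/shoes?utm_source=fb",
            "HTTPS://example.sg/shoes"]
# All three variants collapse to a single normalised URL.
print({normalise_url(u) for u in variants})
```

Group your crawled URLs by this normalised key: any group with more than one member is a candidate for a canonical tag or a 301 redirect.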
Spotting Thin Content Pages
Thin content refers to pages with very little substantive text — typically under 200 to 300 words of unique content. While word count alone does not determine quality, pages with very little content often fail to provide enough value to rank well and can drag down the overall quality signals of your site.
To identify thin content in Screaming Frog:
- Go to the “Internal” tab, which displays all crawled internal URLs.
- Click the “Word Count” column header to sort pages by word count in ascending order. Pages with zero or very low word counts appear at the top.
- Filter by HTML pages only (using the filter dropdown or by excluding images, CSS, JavaScript, and other resource types) to focus on content pages.
- Review the pages with the lowest word counts and assess whether they genuinely serve a purpose or need improvement.
Not every page with a low word count is a problem. Contact pages, thank-you pages, and functional pages like login screens naturally have little text. The concern is with pages intended to rank for search queries — landing pages, blog posts, product descriptions, and service pages — that lack sufficient content to satisfy user intent.
For Singapore businesses, thin content is particularly common in these scenarios: product category pages that only list items without any descriptive text, service pages that are essentially placeholders with a heading and a phone number, auto-generated pages from CMS systems or directory plugins, and blog posts that were published as brief news updates with only a sentence or two.
Address thin content by either expanding the page with genuinely useful information, consolidating multiple thin pages into one comprehensive page, or applying a noindex tag if the page serves a functional purpose but should not appear in search results. The goal is to ensure every indexed page on your site provides enough value to justify its presence in Google’s index.
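Working from an exported "Internal" tab, you can shortlist thin pages by word count while skipping pages that are legitimately short. The paths, the functional-page list, and the 250-word threshold below are all illustrative assumptions to adjust for your own site:

```python
# Pages expected to be short; exclude them from the thin-content shortlist.
FUNCTIONAL_PATHS = {"/contact", "/thank-you", "/login"}

def thin_pages(pages: list[tuple[str, int]], min_words: int = 250) -> list[str]:
    """Return URL paths below the word-count threshold, skipping known
    functional pages. `pages` is (url_path, word_count) from an export."""
    return [url for url, words in pages
            if words < min_words and url not in FUNCTIONAL_PATHS]

crawl = [("/services/seo", 120), ("/blog/guide", 1400), ("/contact", 40)]
print(thin_pages(crawl))  # → ['/services/seo']
```

The contact page is excluded despite its 40 words, which mirrors the judgement call described above: low word count only matters on pages meant to rank.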
Auditing Missing and Problematic Meta Tags
Title tags and meta descriptions are critical on-page SEO elements. The title tag is a direct ranking factor, and the meta description, while not a ranking factor, heavily influences click-through rates from search results. Screaming Frog makes it easy to audit both across your entire site in seconds.
To audit title tags:
- Click the “Page Titles” tab in the main crawl window.
- Use the filter dropdown to identify issues:
- Missing: Pages with no title tag at all — the most critical issue.
- Duplicate: Multiple pages sharing the same title tag, which creates confusion for search engines.
- Over 60 Characters: Titles that may be truncated in search results.
- Below 30 Characters: Titles that are likely too short to be descriptive.
- Same as H1: While not inherently problematic, having unique title tags and H1s gives you the opportunity to target more keyword variations.
To audit meta descriptions:
- Click the “Meta Description” tab.
- Filter by “Missing” to find pages without meta descriptions — these pages will have Google auto-generate a snippet, which may not be optimal.
- Filter by “Duplicate” to find pages sharing the same meta description.
- Filter by “Over 155 Characters” to find descriptions that will be truncated in search results.
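The title checks above are easy to replicate on exported data, which is handy when you want to re-verify fixes without a full re-crawl. The character limits are approximations, since Google actually truncates by pixel width, and the URLs and titles are illustrative:

```python
def audit_titles(titles: dict[str, str],
                 max_len: int = 60, min_len: int = 30) -> dict[str, list]:
    """Bucket page titles into issue categories similar to the filters above.

    Character thresholds are approximations of Google's pixel-based truncation."""
    issues = {"missing": [], "too_long": [], "too_short": []}
    seen: dict[str, list[str]] = {}
    for url, title in titles.items():
        if not title:
            issues["missing"].append(url)
            continue
        if len(title) > max_len:
            issues["too_long"].append(url)
        elif len(title) < min_len:
            issues["too_short"].append(url)
        seen.setdefault(title, []).append(url)
    # Any title used by more than one URL is a duplicate group.
    issues["duplicate"] = [urls for urls in seen.values() if len(urls) > 1]
    return issues

pages = {"/a": "SEO Services Singapore | Example Agency",
         "/b": "SEO Services Singapore | Example Agency",
         "/c": ""}
print(audit_titles(pages))
```

The same structure works for meta descriptions by swapping the thresholds to roughly 70 and 155 characters.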
Beyond title tags and meta descriptions, Screaming Frog also audits H1 tags (under the “H1” tab), H2 tags, image alt text (under the “Images” tab, filter by “Missing Alt Text”), canonical tags, meta robots tags, and hreflang attributes. Each of these has its own tab or can be accessed through the relevant section of the crawl data.
For Singapore businesses targeting multiple languages, the hreflang audit is particularly important. Check that every page with an English version correctly references its Chinese, Malay, or Tamil counterparts and vice versa. Incorrect hreflang implementation can cause the wrong language version to appear in search results, frustrating users and harming your click-through rates. Our SEO team regularly audits hreflang tags as part of our technical SEO service.
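A reciprocity check is the core of any hreflang audit: every annotation must be matched by a return tag on the target page, or search engines may ignore the pair. This sketch works on a simplified map of annotations; the URLs and language codes are illustrative:

```python
def missing_return_tags(hreflang: dict[str, dict[str, str]]) -> list[tuple[str, str]]:
    """Find hreflang annotations lacking a reciprocal return tag.

    `hreflang` maps each URL to its {lang: target_url} annotations.
    A valid pair requires the target page to annotate back to the source."""
    problems = []
    for source, targets in hreflang.items():
        for lang, target in targets.items():
            back = hreflang.get(target, {})
            if source not in back.values():
                problems.append((source, target))
    return problems

tags = {"/en/pricing": {"zh": "/zh/pricing"},
        "/zh/pricing": {}}  # missing the return tag to /en/pricing
print(missing_return_tags(tags))  # → [('/en/pricing', '/zh/pricing')]
```

Feeding this from a crawl export of hreflang attributes surfaces every one-way annotation in a single pass.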
Exporting and Acting on Your Data
Screaming Frog’s crawl data is only valuable if you turn it into action. The tool provides extensive export options that make it easy to share findings with developers, content creators, and stakeholders.
To export data from Screaming Frog:
- Navigate to the tab containing the data you want to export (for example, “Response Codes” filtered to “Client Error 4xx”).
- Click “Export” in the top right to save the filtered data as a CSV, Excel, or Google Sheets file.
- For a comprehensive site-wide report, use File > Save As to save the entire crawl, which can be reopened later for further analysis.
- Use Reports from the top menu to generate specific audit reports. Useful reports include:
- Redirect Chains: A complete list of all redirect chains found.
- Crawl Overview: Summarises the crawl, including the click depth of pages from the homepage.
- Orphan Pages: Pages not linked from any other page on your site (requires Google Analytics or Search Console integration).
- Sitemap Errors: Issues with pages referenced in your XML sitemap.
When creating your action plan from the exported data, prioritise issues by impact. Start with broken links on high-traffic pages, redirect chains affecting important landing pages, and missing title tags on pages targeting valuable keywords. Then address duplicate content, thin content, and missing meta descriptions as secondary priorities.
For ongoing monitoring, the paid version of Screaming Frog allows you to schedule crawls and compare them against previous crawls. This crawl comparison feature highlights new issues introduced since the last audit, making it efficient to catch problems early. Export comparison reports and share them with your development team as part of a regular quality assurance process.
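If you export the same issue list (for example, all 4xx URLs) from two crawls, a simple set comparison shows what is new, what was fixed, and what persists. This sketch mirrors the idea behind the built-in crawl comparison in a scriptable form; the URLs are illustrative:

```python
def compare_crawls(previous: set[str], current: set[str]) -> dict[str, set[str]]:
    """Split issue URLs from two crawl exports into new, resolved and persisting."""
    return {"new": current - previous,
            "resolved": previous - current,
            "persisting": previous & current}

last_month = {"/old-page", "/missing-asset.css"}  # 4xx URLs from the previous crawl
this_month = {"/old-page", "/new-broken-link"}    # 4xx URLs from the latest crawl
print(compare_crawls(last_month, this_month))
```

The "new" bucket is what you escalate to developers; the "persisting" bucket is what you chase up from the last report.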
Singapore businesses that invest in monthly technical audits consistently outperform competitors who neglect the technical foundation of their sites. Pair your Screaming Frog findings with data from Google Ads campaigns to understand which technical fixes will have the greatest impact on your highest-value landing pages. For businesses also running social campaigns, our social media marketing services ensure a consistent experience across all channels.
Frequently Asked Questions
Is Screaming Frog free?
Screaming Frog offers a free version that crawls up to 500 URLs per crawl, which is sufficient for small websites. The paid licence is billed annually; it removes the URL limit and adds features like custom extraction, JavaScript rendering, crawl comparison, and integrations with Google Analytics, Search Console, and PageSpeed Insights. For most serious SEO work, the paid version is worth the investment.
How long does a Screaming Frog crawl take?
Crawl time depends on the number of pages, server response speed, and your configured crawl rate. A 500-page site typically completes in two to five minutes. A 10,000-page site may take 15 to 30 minutes. Very large sites with 100,000 or more pages can take several hours. You can adjust the crawl speed under Configuration > Speed to balance thoroughness with server impact.
Can Screaming Frog crawl JavaScript-rendered pages?
Yes, the paid version of Screaming Frog includes JavaScript rendering capabilities. Enable this under Configuration > Spider > Rendering. It uses an embedded Chromium browser to render pages, allowing it to crawl content loaded via JavaScript frameworks like React, Angular, and Vue. This is essential for modern websites where content is dynamically loaded on the client side.
What is the difference between Screaming Frog and cloud-based SEO audit tools?
Screaming Frog runs locally on your computer, meaning your data stays private and there are no ongoing crawl limits based on subscription tier (with the paid licence). Cloud-based tools like SEMrush Site Audit and Ahrefs Site Audit run on their servers, offering convenience and scheduled crawls without requiring local installation. Screaming Frog offers deeper customisation and more granular data, while cloud tools provide a simpler interface and automatic scheduling. Many professionals use Screaming Frog for deep audits and cloud tools for ongoing monitoring.
How often should I run a technical SEO audit with Screaming Frog?
Run a comprehensive audit at least once a month. Additionally, crawl your site after any significant changes such as a website redesign, CMS migration, URL structure changes, or large-scale content updates. If you are actively making SEO improvements, weekly crawls help you verify that fixes have been implemented correctly and no new issues have been introduced.
Can Screaming Frog generate an XML sitemap?
Yes, Screaming Frog can generate XML sitemaps based on your crawl data. After completing a crawl, go to Sitemaps > XML Sitemap from the top menu. You can customise which pages to include, set priority values and change frequency attributes, and export the sitemap file. You can also compare your existing sitemap against the crawl results to find pages in your sitemap that return errors or pages missing from your sitemap that should be included.
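The sitemap format itself is simple XML, so you can also build one from any URL list, such as the indexable HTML pages in a crawl export. A minimal sketch producing only loc entries, with illustrative URLs:

```python
from xml.etree import ElementTree as ET

def build_sitemap(urls: list[str]) -> str:
    """Serialise a minimal XML sitemap (loc entries only) for the given URLs."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for url in urls:
        # Each URL gets a <url><loc>...</loc></url> entry.
        ET.SubElement(ET.SubElement(urlset, "url"), "loc").text = url
    return ET.tostring(urlset, encoding="unicode")

print(build_sitemap(["https://example.sg/", "https://example.sg/services"]))
```

Optional elements like lastmod, changefreq, and priority can be added the same way, though modern search engines mostly pay attention to loc and lastmod.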