If you’ve been using Excel or Google Sheets for SEO, you’ve probably built (or copied) formulas you can barely read. Extracting a domain from a URL takes a wall of nested functions. Counting words in a cell is a trick involving LEN, TRIM, and SUBSTITUTE. Nobody remembers how they work. You just pray they don’t break.
Google Apps Script changes the game. You write a function once in plain JavaScript, and your entire team uses it like any native formula. No extensions, no add-ons. Just =DOMAIN(A2) instead of a formula that looks like it was written by a cat walking across a keyboard.
Here are 16 custom functions that will make you wonder why you ever did it the old way.
1. DOMAIN | Extract the domain from a URL
The spreadsheet way:
=MID(SUBSTITUTE(A2,"www.",""),SEARCH(":",A2)+3,SEARCH("/",SUBSTITUTE(A2,"www.",""),9)-SEARCH(":",A2)-3)
Good luck explaining that to anyone. And it breaks if the URL has no trailing slash.
The Apps Script way:
/**
 * Extracts the domain from a URL (without www)
 * @param {string} url The full URL
 * @return {string} The domain
 * @customfunction
 */
function DOMAIN(url) {
  var match = url.toString().match(/^https?:\/\/([^/?#]+)/);
  if (!match) return 'Invalid URL';
  return match[1].replace(/^www\./, '');
}
Type =DOMAIN(A2) and you’re done. It handles the edge cases the spreadsheet formula doesn’t, and returns "Invalid URL" for anything without an http(s) protocol.
2. URLPATH | Extract the path from a URL
The spreadsheet way:
=RIGHT(A2,LEN(A2)-FIND("/",A2,FIND("/",A2)+2)+1)
The Apps Script way:
/**
 * Extracts the path from a URL
 * @param {string} url The full URL
 * @return {string} The URL path
 * @customfunction
 */
function URLPATH(url) {
  var match = url.toString().match(/^https?:\/\/[^/?#]+(\/[^?#]*)?/);
  if (!match) return 'Invalid URL';
  return match[1] || '/';
}
=URLPATH("https://example.com/blog/my-post/?ref=twitter") returns /blog/my-post/. Clean and predictable.
3. GETPARAM | Extract a query parameter value
The spreadsheet way:
There’s no clean way to do this in a single formula. You’d need a chain of MID, FIND, SEARCH, and LEN functions, and it gets worse with multiple parameters. Most people give up and do it manually.
The Apps Script way:
/**
 * Extracts a query parameter value from a URL
 * @param {string} url The full URL
 * @param {string} param The parameter name
 * @return {string} The parameter value
 * @customfunction
 */
function GETPARAM(url, param) {
  var qs = url.toString().split('?')[1];
  if (!qs) return '';
  var regex = new RegExp('(?:^|&)' + param + '=([^&]*)');
  var match = qs.split('#')[0].match(regex);
  return match ? decodeURIComponent(match[1]) : '';
}
=GETPARAM(A2, "utm_source") pulls the value instantly. Works with any parameter name.
4. SLUGIFY | Convert text to a URL-friendly slug
The spreadsheet way:
=LOWER(SUBSTITUTE(SUBSTITUTE(SUBSTITUTE(TRIM(A2)," ","-"),"&","and"),".",""))
And that only handles three characters. Real URLs need much more cleanup.
The Apps Script way:
/**
 * Converts text into a URL-friendly slug
 * @param {string} text The text to slugify
 * @return {string} URL-safe slug
 * @customfunction
 */
function SLUGIFY(text) {
  if (!text) return '';
  return text
    .toString()
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9\s-]/g, '')
    .replace(/\s+/g, '-')
    .replace(/^-+|-+$/g, '');
}
=SLUGIFY("10 Best SEO Tools & Tips (2026)") returns 10-best-seo-tools-tips-2026. It strips punctuation and symbols; note that accented characters are removed rather than transliterated, so extend the regex if you work with non-English titles.
5. WORDCOUNT | Count words in a cell
The spreadsheet way:
=IF(LEN(TRIM(A2))=0,0,LEN(TRIM(A2))-LEN(SUBSTITUTE(A2," ",""))+1)
This is the kind of formula that makes people question their career choices.
The Apps Script way:
/**
 * Counts the number of words in a cell
 * @param {string} text The text to count
 * @return {number} Word count
 * @customfunction
 */
function WORDCOUNT(text) {
  if (!text) return 0;
  return text.toString().trim().split(/\s+/).length;
}
=WORDCOUNT(A2) does exactly what it says.
6. COUNTWORD | Count occurrences of a keyword in text
The spreadsheet way:
=(LEN(A2)-LEN(SUBSTITUTE(LOWER(A2),"keyword","")))/LEN("keyword")
The classic LEN/SUBSTITUTE division trick: removing every occurrence of the keyword shortens the string by the keyword’s length times the count, so dividing the difference by the keyword length gives the count. It works, but try explaining that to a colleague.
The Apps Script way:
/**
 * Counts how many times a word appears in text (case-insensitive)
 * @param {string} text The text to search
 * @param {string} word The word to count
 * @return {number} Number of occurrences
 * @customfunction
 */
function COUNTWORD(text, word) {
  if (!text || !word) return 0;
  const regex = new RegExp(word.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'), 'gi');
  return (text.toString().match(regex) || []).length;
}
=COUNTWORD(A2, "SEO") counts every occurrence. Case-insensitive by default.
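One caveat worth knowing: COUNTWORD matches substrings, so counting "SEO" also counts the "SEO" inside "SEOs". If you need whole-word matching, a variant can anchor the pattern with word boundaries. A sketch (COUNTWORDEXACT is a hypothetical name, not part of the set above):

```javascript
/**
 * Counts exact (whole-word) occurrences of a word, case-insensitive.
 * Hypothetical variant of COUNTWORD using \b word boundaries.
 * @param {string} text The text to search
 * @param {string} word The word to count
 * @return {number} Number of whole-word occurrences
 * @customfunction
 */
function COUNTWORDEXACT(text, word) {
  if (!text || !word) return 0;
  // Escape regex metacharacters, then anchor with word boundaries
  var escaped = word.toString().replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
  var regex = new RegExp('\\b' + escaped + '\\b', 'gi');
  return (text.toString().match(regex) || []).length;
}
```

With this version, counting "SEO" in "SEO tools for SEOs" returns 1 where COUNTWORD returns 2.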
A quick note before we go further
The first 6 functions use only standard JavaScript, so they work as cell formulas out of the box. Type =DOMAIN(A2) and you get a result.
The next 6 are different. They use UrlFetchApp to make HTTP requests, and Google does not allow network calls inside custom functions. If you type =HTTPSTATUS(A2) in a cell, you’ll get a permission error.
The functions themselves work perfectly. You just need to call them from a custom menu instead of a cell formula. I’ll show you how to set that up after the last function.
7. HTTPSTATUS | Check the HTTP status code of a URL
The spreadsheet way:
Not possible with native formulas. You’d need an external tool or add-on.
The Apps Script way:
/**
* Returns the HTTP status code for a URL
* @param {string} url The URL to check
* @return {number} HTTP status code
*/
function HTTPSTATUS(url) {
  try {
    const response = UrlFetchApp.fetch(url, {
      muteHttpExceptions: true,
      followRedirects: false
    });
    return response.getResponseCode();
  } catch(e) {
    return 'Error: ' + e.message;
  }
}
Put a list of URLs in column A, run the function from the custom menu, and you’ve got a bulk status checker that returns 200, 301, 404, or whatever the server responds with.
8. METATITLE | Extract the page title from a URL
The spreadsheet way (Google Sheets only):
=IMPORTXML(A2, "//title")
This works… sometimes. IMPORTXML is rate-limited, breaks with JavaScript-rendered pages, and frequently returns #N/A errors when you need it most.
The Apps Script way:
/**
* Extracts the page title from a URL
* @param {string} url The URL to fetch
* @return {string} The page title
*/
function METATITLE(url) {
  try {
    const html = UrlFetchApp.fetch(url, { muteHttpExceptions: true }).getContentText();
    const match = html.match(/<title[^>]*>([\s\S]*?)<\/title>/i);
    return match ? match[1].trim() : 'No title found';
  } catch(e) {
    return 'Error: ' + e.message;
  }
}
More reliable than IMPORTXML, no rate limit surprises, and you can extend it to extract anything else from the page. Run it via the custom menu on a column of URLs.
9. ISINDEXABLE | Check if a page is indexable
The spreadsheet way:
Not possible. There’s no native formula that can read HTTP response headers or parse meta robots tags.
The Apps Script way:
/**
* Checks if a URL is indexable (no noindex directive)
* @param {string} url The URL to check
* @return {string} "Indexable" or the blocking directive
*/
function ISINDEXABLE(url) {
  try {
    const response = UrlFetchApp.fetch(url, {
      muteHttpExceptions: true,
      followRedirects: true
    });
    // Header names keep the server's casing, so check both spellings
    const headers = response.getHeaders();
    const xRobots = headers['X-Robots-Tag'] || headers['x-robots-tag'] || '';
    if (/noindex/i.test(xRobots)) return 'Blocked: X-Robots-Tag noindex';
    const html = response.getContentText();
    const robotsMeta = html.match(/<meta[^>]*name=["']robots["'][^>]*content=["']([^"']*)["']/i);
    if (robotsMeta && /noindex/i.test(robotsMeta[1])) return 'Blocked: meta robots noindex';
    return 'Indexable';
  } catch(e) {
    return 'Error: ' + e.message;
  }
}
Run it on a column of URLs and it tells you whether each page can be indexed, checking both the X-Robots-Tag header and the meta robots tag. No spreadsheet formula can do that.
10. REDIRECTCHAIN | Follow the full redirect chain
The spreadsheet way:
Not possible.
The Apps Script way:
/**
* Returns the full redirect chain for a URL
* @param {string} url The starting URL
* @return {string} The redirect chain (separated by >)
*/
function REDIRECTCHAIN(url) {
  const chain = [];
  let current = url;
  try {
    for (let i = 0; i < 10; i++) {
      chain.push(current);
      const response = UrlFetchApp.fetch(current, {
        muteHttpExceptions: true,
        followRedirects: false
      });
      const code = response.getResponseCode();
      if (code < 300 || code >= 400) break;
      const location = response.getHeaders()['Location'] || response.getHeaders()['location'];
      if (!location) break;
      if (location.startsWith('http')) {
        current = location;
      } else {
        // Apps Script's V8 runtime has no URL class, so resolve relative redirects manually
        const origin = current.match(/^https?:\/\/[^/]+/)[0];
        current = location.startsWith('/') ? origin + location : origin + '/' + location;
      }
    }
    return chain.join(' > ');
  } catch(e) {
    return chain.join(' > ') + ' > Error: ' + e.message;
  }
}
For a URL like http://example.com, this returns something like http://example.com > https://example.com > https://www.example.com. Essential for migration audits and debugging redirect loops.
11. METADESC | Extract the meta description from a URL
The spreadsheet way (Google Sheets only):
=IMPORTXML(A2, "//meta[@name='description']/@content")
Same problems as IMPORTXML for titles. Rate-limited, fragile, and unreliable at scale.
The Apps Script way:
/**
* Extracts the meta description from a URL
* @param {string} url The URL to fetch
* @return {string} The meta description
*/
function METADESC(url) {
  try {
    const html = UrlFetchApp.fetch(url, { muteHttpExceptions: true }).getContentText();
    const match = html.match(/<meta[^>]*name=["']description["'][^>]*content=["']([^"']*)["']/i);
    return match ? match[1].trim() : 'No description found';
  } catch(e) {
    return 'Error: ' + e.message;
  }
}
Pair it with METATITLE and you’ve got a content audit tool. Run both on a column of URLs via the custom menu and you’ll have every title and description in seconds.
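Both functions are just regular-expression extraction over the fetched HTML, so pairing them is cheap: fetch the page once and parse it twice. As a sketch, a hypothetical helper that takes raw HTML as input (which also makes it testable without any network access) could look like this:

```javascript
/**
 * Extracts title and meta description from raw HTML.
 * Hypothetical helper, not part of the article's set: fetch the HTML
 * once with UrlFetchApp.fetch(url).getContentText(), then parse twice.
 * @param {string} html Raw HTML of a page
 * @return {string[]} [title, description]
 */
function parseTitleAndDesc(html) {
  const titleMatch = html.match(/<title[^>]*>([\s\S]*?)<\/title>/i);
  const descMatch = html.match(/<meta[^>]*name=["']description["'][^>]*content=["']([^"']*)["']/i);
  return [
    titleMatch ? titleMatch[1].trim() : 'No title found',
    descMatch ? descMatch[1].trim() : 'No description found'
  ];
}
```

One fetch per URL instead of two also halves your UrlFetchApp quota usage on large audits.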
12. CANONICAL | Extract the canonical URL from a page
The spreadsheet way:
=IMPORTXML(A2, "//link[@rel='canonical']/@href")
Again, IMPORTXML with all its limitations. And it can’t check the HTTP Link header, which is another valid way to declare a canonical.
The Apps Script way:
/**
* Extracts the canonical URL from a page (checks both HTML and HTTP header)
* @param {string} url The URL to check
* @return {string} The canonical URL
*/
function CANONICAL(url) {
  try {
    const response = UrlFetchApp.fetch(url, { muteHttpExceptions: true });
    // Header names keep the server's casing, so check both spellings;
    // quotes around rel=canonical are optional in Link headers
    const headers = response.getHeaders();
    const linkHeader = headers['Link'] || headers['link'] || '';
    const relCanonical = linkHeader.match(/<([^>]+)>;\s*rel=["']?canonical["']?/i);
    if (relCanonical) return relCanonical[1];
    const html = response.getContentText();
    const match = html.match(/<link[^>]*rel=["']canonical["'][^>]*href=["']([^"']*)["']/i);
    return match ? match[1] : 'No canonical found';
  } catch(e) {
    return 'Error: ' + e.message;
  }
}
This checks both the HTML <link> tag and the HTTP Link header, which IMPORTXML simply cannot do.
13. SUBFOLDER | Extract a specific folder from a URL path
The spreadsheet way:
=IFERROR(LEFT(TRIM(MID(A2,FIND("#",SUBSTITUTE(A2,"/","#",4))+1,255)),FIND("/",TRIM(MID(A2,FIND("#",SUBSTITUTE(A2,"/","#",4))+1,255)))-1),TRIM(MID(A2,FIND("#",SUBSTITUTE(A2,"/","#",4))+1,255)))
This is arguably the worst formula on this list. It temporarily replaces slashes with # to find the Nth occurrence, then extracts text between them. The 4 at the end means “first subfolder.” Change it to 5 for the second subfolder. Good luck remembering that.
The Apps Script way:
/**
 * Extracts a specific subfolder from a URL path
 * @param {string} url The full URL
 * @param {number} position Which subfolder (1 = first, 2 = second, etc.)
 * @return {string} The subfolder name
 * @customfunction
 */
function SUBFOLDER(url, position) {
  // Relies on the URLPATH function from earlier in this article
  var path = URLPATH(url);
  if (path === 'Invalid URL') return path;
  var parts = path.split('/').filter(function(p) { return p; });
  return parts[position - 1] || '';
}
=SUBFOLDER("https://example.com/blog/seo/tips/", 2) returns seo. Readable, flexible, and doesn’t require a PhD to modify.
14. CLEANURL | Strip protocol, www, and trailing slash
The spreadsheet way:
=SUBSTITUTE(SUBSTITUTE(SUBSTITUTE(SUBSTITUTE(A2,"https://",""),"http://",""),"www.",""),IF(RIGHT(SUBSTITUTE(SUBSTITUTE(SUBSTITUTE(A2,"https://",""),"http://",""),"www.",""),1)="/","/",""),"")
Four nested SUBSTITUTE calls and a conditional RIGHT check. And it still won’t handle edge cases like trailing query strings.
The Apps Script way:
/**
 * Strips protocol, www, and trailing slash from a URL
 * @param {string} url The URL to clean
 * @return {string} The cleaned URL
 * @customfunction
 */
function CLEANURL(url) {
  return url.toString()
    .replace(/^https?:\/\//, '')
    .replace(/^www\./, '')
    .replace(/\/$/, '');
}
=CLEANURL("https://www.example.com/blog/") returns example.com/blog. Three lines, zero confusion.
15. TITLECHECK | Audit title tag length
The spreadsheet way:
=IF(A2="","Missing",IF(LEN(A2)>60,"Too long ("&LEN(A2)&" chars)",IF(LEN(A2)<30,"Too short ("&LEN(A2)&" chars)","OK ("&LEN(A2)&" chars)")))
Nested IF statements with LEN and string concatenation. Works, but adding more rules (like checking for brand name or pipe characters) turns it into a monster.
The Apps Script way:
/**
 * Audits a title tag and returns its status
 * @param {string} title The title tag text
 * @return {string} Status with character count
 * @customfunction
 */
function TITLECHECK(title) {
  if (!title) return 'Missing';
  const len = title.toString().length;
  if (len > 60) return 'Too long (' + len + ' chars)';
  if (len < 30) return 'Too short (' + len + ' chars)';
  return 'OK (' + len + ' chars)';
}
Easy to read, easy to extend. Want to add a check for duplicate pipes or missing brand names? Just add another if statement.
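As a sketch of that kind of extension, here is a hypothetical variant (the pipe rule, its threshold, and the function name are assumptions, not part of the original set) that also flags titles with more than one pipe separator:

```javascript
/**
 * Audits a title tag, also flagging titles with more than one pipe.
 * Hypothetical extension of TITLECHECK; the "more than one |" rule
 * and 30-60 character window are example thresholds.
 * @param {string} title The title tag text
 * @return {string} Status with character count
 * @customfunction
 */
function TITLECHECK2(title) {
  if (!title) return 'Missing';
  const text = title.toString();
  const len = text.length;
  // New rule: more than one "|" usually means a cluttered title
  if ((text.match(/\|/g) || []).length > 1) return 'Too many pipes (' + len + ' chars)';
  if (len > 60) return 'Too long (' + len + ' chars)';
  if (len < 30) return 'Too short (' + len + ' chars)';
  return 'OK (' + len + ' chars)';
}
```

Each rule is one readable line, which is exactly why the Apps Script version scales where the nested IF formula doesn't.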
16. CLASSIFYURL | Auto-classify pages by URL pattern
The spreadsheet way:
=IF(ISNUMBER(SEARCH("/blog/",A2)),"Blog",IF(ISNUMBER(SEARCH("/product/",A2)),"Product",IF(ISNUMBER(SEARCH("/category/",A2)),"Category",IF(ISNUMBER(SEARCH("/tag/",A2)),"Tag","Other"))))
Every new category needs another nested IF(ISNUMBER(SEARCH(...))) wrapper. After five or six categories, this formula is unreadable and impossible to maintain.
The Apps Script way:
/**
 * Classifies a URL based on its path pattern
 * @param {string} url The URL to classify
 * @return {string} The page type
 * @customfunction
 */
function CLASSIFYURL(url) {
  const rules = [
    ['/blog/', 'Blog'],
    ['/product/', 'Product'],
    ['/category/', 'Category'],
    ['/tag/', 'Tag'],
    ['/author/', 'Author'],
    ['/landing/', 'Landing Page'],
  ];
  const u = url.toString(); // cells can hold non-string values
  for (const [pattern, label] of rules) {
    if (u.includes(pattern)) return label;
  }
  return 'Other';
}
Adding a new category is one line. No nesting, no ISNUMBER, no SEARCH. And if your URL structure changes, you update the rules array instead of rewriting the whole formula.
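If plain substring matching isn't enough, say your product URLs look like /p/12345/, the same pattern works with regular expressions in the rules array. A hypothetical variant (function name and patterns are examples only, not from the original set):

```javascript
/**
 * Classifies a URL using regex rules instead of plain substrings.
 * Hypothetical variant of CLASSIFYURL; the patterns below are examples.
 * @param {string} url The URL to classify
 * @return {string} The page type
 * @customfunction
 */
function CLASSIFYURLREGEX(url) {
  const rules = [
    [/\/blog\//, 'Blog'],
    [/\/p\/\d+\//, 'Product'],          // e.g. /p/12345/
    [/\/(category|tag)\//, 'Taxonomy'], // one label for two path patterns
  ];
  const u = url.toString();
  for (const [pattern, label] of rules) {
    if (pattern.test(u)) return label;
  }
  return 'Other';
}
```

Same shape, same one-line-per-rule maintenance, just with more expressive matching.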
How to use these formulas
- Open any Google Sheet
- Go to Extensions > Apps Script
- Paste the functions you want
- Save (Ctrl+S)
- Go back to your sheet and use them like any formula
Formulas 1-6 and 13-16 work directly as cell formulas. Type =DOMAIN(A2) and they appear in autocomplete. Anyone with access to the sheet can use them without knowing there’s code behind them.
For a deeper introduction to Apps Script, check out Introduction to Google Apps Script for SEOs.
Running the UrlFetchApp functions
Functions 7-12 (HTTPSTATUS, METATITLE, ISINDEXABLE, REDIRECTCHAIN, METADESC, CANONICAL) use UrlFetchApp to make HTTP requests. Google doesn’t allow network calls inside custom cell functions, so these need to run from a custom menu instead.
Add this code alongside the functions above:
function onOpen() {
  SpreadsheetApp.getUi()
    .createMenu('SEO Tools')
    .addItem('Run HTTPSTATUS', 'runOnSelection_HTTPSTATUS')
    .addItem('Run METATITLE', 'runOnSelection_METATITLE')
    .addItem('Run METADESC', 'runOnSelection_METADESC')
    .addItem('Run ISINDEXABLE', 'runOnSelection_ISINDEXABLE')
    .addItem('Run REDIRECTCHAIN', 'runOnSelection_REDIRECTCHAIN')
    .addItem('Run CANONICAL', 'runOnSelection_CANONICAL')
    .addToUi();
}

function runOnSelection(fn) {
  const sheet = SpreadsheetApp.getActiveSheet();
  const range = sheet.getActiveRange();
  const urls = range.getValues();
  const output = urls.map(row => [row[0] ? fn(row[0]) : '']);
  // Write results into the column immediately to the right of the selection
  sheet.getRange(range.getRow(), range.getColumn() + 1, output.length, 1).setValues(output);
}

function runOnSelection_HTTPSTATUS() { runOnSelection(HTTPSTATUS); }
function runOnSelection_METATITLE() { runOnSelection(METATITLE); }
function runOnSelection_METADESC() { runOnSelection(METADESC); }
function runOnSelection_ISINDEXABLE() { runOnSelection(ISINDEXABLE); }
function runOnSelection_REDIRECTCHAIN() { runOnSelection(REDIRECTCHAIN); }
function runOnSelection_CANONICAL() { runOnSelection(CANONICAL); }
After saving and reloading the sheet, a new SEO Tools menu appears in the menu bar. Select your URLs, pick the function you want, and the results are written to the column next to your selection.
Want it plug-and-play?
If you’d rather skip writing code entirely, check out Unlimited Sheets by Nacho Mascort. It’s a Google Sheets add-on that gives you 35+ ready-made custom functions for SEO, including status code checks, SERP data extraction, sitemap parsing, keyword clustering, and AI-powered scraping. It’s the plug-and-play version of everything in this article (and more).
The takeaway
Every formula in this article replaces something that’s either impossible or painful in a regular spreadsheet. The pattern is always the same: write a small JavaScript function, save it, and your whole team gets a clean formula that does exactly what it says.
The best part is these are building blocks. Once you’re comfortable creating custom functions, you can combine them, extend them, and build tools that fit your exact workflow.
Have questions or want to share what you’ve built? Find me on 𝕏 @jlhernando or LinkedIn.