Can you monitor Google rankings by country without an expensive SEO tool?
Yes, if you keep the scope small and treat the result as a repeatable country snapshot, not a universal ranking truth.
That means this setup is a good fit when you need to:
track a limited keyword set
compare the same keywords across a few countries
store daily or twice-daily snapshots
answer “what changed?” without paying for a large platform
It is not the right fit when you need:
city-level local SEO grids
mobile parity at scale
very high request volume
guaranteed uptime
a commercial-grade rank tracker for clients
One point needs to be clear up front: Google’s official programmable search products are not the same thing as the live Google SERP you see on google.com. Google’s Custom Search JSON API overview says it returns results from a Programmable Search Engine, and that same page currently says the product is closed to new customers and that existing customers must transition by January 1, 2027. If your goal is “show me what Google Search looks like in this country,” you still need a real-country measurement workflow or a managed service that measures the live SERP for you.
What changes Google results from one country to another?
Country-level rankings change because Google does not serve one fixed result set to everyone.
Google’s own help pages say results can differ because of location, language, device type, time, and personalization. Google also says that even signed-out searches can still use search-related activity, general area, and browser context.
For this workflow, the variables that matter most are:
Exit IP location: the request should leave through the country you want to measure.
gl: the country hint in the search URL.
hl: the interface language hint in the search URL.
Session state: signed-in history, saved activity, and old browser state can skew what you see.
SERP volatility: the same query can shift as Google updates its index and rolls changes across data centers.
Here’s the practical takeaway: a VPN spot check from your everyday browser is not a monitoring system. It is a one-off observation mixed with your own browsing state. A usable rank monitor needs the same request shape, the same country mapping, the same storage fields, and the same verification rules every time.
What do you need before the first request?
You need a narrow, boring stack. That is a feature, not a limitation.
Use:
Python 3.10 or newer
requests, beautifulsoup4, and lxml
a rotating residential proxy with country targeting
a signed-out, clean session
a short keyword list
a CSV or SQLite file to store history
If you’ve been comparing static vs rotating proxy, this is the easy call: use rotating residential for country-level rank checks. You are measuring many independent observations, not trying to preserve one long session.
If you’re still deciding residential proxy vs datacenter, residential is the better default here for a simple reason: country-level Google monitoring depends on realistic regional exits more than raw speed. Datacenter proxies can be fine for many automation jobs, but they are a weak default for cross-country Google measurements.
For a concrete example, Proxy001’s residential proxies match the needs of this workflow. As of April 22, 2026, the official site lists:
100M+ real residential IPs
coverage across 200+ countries and regions
country-to-city targeting
HTTP(S) and SOCKS5 support
unlimited concurrency
rotation per request or at a custom interval
a 500MB trial for new users
The same product page also shows the exact account flow you need to set up a residential proxy in a script:
Sign up for an account.
Choose a residential proxy plan.
Open Sub-account Management.
Create a sub-account.
Go to Get Agent.
Copy the generated username, password, host, and port.
That last step matters. Do not invent proxy syntax inside your code. Generate the actual endpoint components in the dashboard and copy them into your script or environment variables.
Install the Python dependencies first:
python -m pip install requests beautifulsoup4 lxml
How do you build a clean country-specific Google request?
A clean request fixes the country, language, and session state on purpose.
Google’s search parameter documentation describes gl as a country code and hl as an interface language. For country-level monitoring, those are the two parameters you should always set explicitly.
Use:
q for the keyword
gl for the target country, such as us, gb, or de
hl for the interface language, such as en or de
start for pagination when you want more than the first 10 results
A typical country-level request looks like this:
https://www.google.com/search?q=best+running+shoes&gl=us&hl=en
Keep three things consistent:
use a fresh session
stay signed out
match Accept-Language to hl
If you are measuring country-level rankings, you usually do not need uule. That parameter is more useful when you are trying to force an exact city or local context. For country monitoring, a country-matched exit IP plus fixed gl and hl is the right default.
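If you want to see exactly what the script will send before wiring in a proxy, here is a minimal sketch that only assembles the URL and headers from the parameters above. The build_country_request() helper is illustrative, not part of the monitor itself:

from urllib.parse import urlencode

def build_country_request(keyword: str, gl: str, hl: str, start: int = 0):
    # q, gl, hl, and start are the only parameters this workflow needs.
    params = {"q": keyword, "gl": gl, "hl": hl, "start": start}
    # Keep Accept-Language aligned with hl so the header and URL agree.
    headers = {"Accept-Language": f"{hl},en;q=0.8"}
    url = "https://www.google.com/search?" + urlencode(params)
    return url, headers

url, headers = build_country_request("best running shoes", gl="us", hl="en")
print(url)      # https://www.google.com/search?q=best+running+shoes&gl=us&hl=en&start=0
print(headers)  # {'Accept-Language': 'en,en;q=0.8'}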
How do you configure proxy endpoints without guessing provider syntax?
The cleanest fix is to keep provider-specific proxy syntax out of the code logic.
Instead of building a PROXY_TEMPLATE and trying to guess how country targeting should be encoded, use a full proxy URL per market. Generate that URL in your provider dashboard, then paste it into environment variables.
Example environment variables:
export PROXY_URL_US="http://USERNAME:PASSWORD@HOST:PORT"
export PROXY_URL_GB="http://USERNAME:PASSWORD@HOST:PORT"
export PROXY_URL_DE="http://USERNAME:PASSWORD@HOST:PORT"
On Windows PowerShell:
$env:PROXY_URL_US="http://USERNAME:PASSWORD@HOST:PORT"
$env:PROXY_URL_GB="http://USERNAME:PASSWORD@HOST:PORT"
$env:PROXY_URL_DE="http://USERNAME:PASSWORD@HOST:PORT"
If your provider uses a single host plus country targeting inside the username, generate the final country-specific form in the dashboard or docs first, then paste the finished URL into the environment variable. The script should not be responsible for reverse-engineering provider syntax.
That is the most reliable way to handle how to use residential proxies for monitoring work. It removes the one provider-specific variable that usually breaks the first run.
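Here is a minimal sketch of that idea in Python, assuming the three PROXY_URL_* variable names above. The load_proxy_urls() helper is illustrative; its only job is to fail fast with a clear message instead of letting the monitor run without a proxy:

import os

REQUIRED_PROXY_VARS = ["PROXY_URL_US", "PROXY_URL_GB", "PROXY_URL_DE"]

def load_proxy_urls() -> dict:
    # Collect any variable that is missing or empty before the first request is sent.
    missing = [name for name in REQUIRED_PROXY_VARS if not os.environ.get(name)]
    if missing:
        raise RuntimeError(f"Missing proxy environment variables: {', '.join(missing)}")
    return {name: os.environ[name] for name in REQUIRED_PROXY_VARS}

proxy_urls = load_proxy_urls()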
Step by step: a minimal Python monitor that stores country snapshots
The script below does four things:
sends a country-specific Google request
routes it through a country-matched rotating residential proxy
extracts organic results
appends a historical snapshot to a CSV file
import csv
import os
import time
from datetime import datetime, timezone
from urllib.parse import urlparse
import requests
from bs4 import BeautifulSoup
BASE_URL = "https://www.google.com/search"
KEYWORDS = [
"your keyword 1",
"your keyword 2",
]
MARKETS = [
{"name": "US", "gl": "us", "hl": "en", "proxy_env": "PROXY_URL_US"},
{"name": "UK", "gl": "gb", "hl": "en", "proxy_env": "PROXY_URL_GB"},
{"name": "Germany", "gl": "de", "hl": "de", "proxy_env": "PROXY_URL_DE"},
]
TARGET_DOMAIN = "example.com"
RESULT_DEPTH = 30
REQUEST_DELAY_SECONDS = 5
OUTPUT_FILE = "country_rank_snapshots.csv"
def new_session(hl: str) -> requests.Session:
    # Fresh, signed-out session with a fixed User-Agent and an
    # Accept-Language header that matches the market's hl value.
    session = requests.Session()
    session.headers.update(
        {
            "User-Agent": (
                "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                "AppleWebKit/537.36 (KHTML, like Gecko) "
                "Chrome/135.0.0.0 Safari/537.36"
            ),
            "Accept-Language": f"{hl},en;q=0.8",
        }
    )
    return session
def proxy_for_market(market: dict) -> dict:
    # Read the full proxy URL for this market from its environment variable.
    proxy_url = os.environ[market["proxy_env"]]
    return {"http": proxy_url, "https": proxy_url}
def verify_exit_country(proxies: dict) -> tuple[str, str, str]:
    # Ask a public IP service where the proxied traffic actually exits.
    response = requests.get("https://ipinfo.io/json", proxies=proxies, timeout=20)
    response.raise_for_status()
    data = response.json()
    return data.get("country", ""), data.get("region", ""), data.get("city", "")
def fetch_html(session, proxies, keyword, gl, hl, start):
    # One country-specific results page: q, gl, hl, and the pagination offset.
    params = {"q": keyword, "gl": gl, "hl": hl, "start": start}
    response = session.get(BASE_URL, params=params, proxies=proxies, timeout=30)
    response.raise_for_status()
    return response.text
def extract_organic_results(html: str) -> list[dict]:
    # Keep only anchors inside #search that wrap an h3 and point to an absolute URL.
    soup = BeautifulSoup(html, "lxml")
    results = []
    seen = set()
    for anchor in soup.select("#search a"):
        href = anchor.get("href", "")
        h3 = anchor.find("h3")
        if not h3:
            continue
        if not href.startswith("http"):
            continue
        if href in seen:
            continue
        title = " ".join(h3.stripped_strings).strip()
        if not title:
            continue
        seen.add(href)
        results.append({"title": title, "url": href})
    return results
def domain_matches(result_url: str, target_domain: str) -> bool:
    # Match the target domain itself or any of its subdomains.
    host = urlparse(result_url).netloc.lower()
    target = target_domain.lower()
    return host == target or host.endswith("." + target)
def collect_snapshot(keyword: str, market: dict) -> list[dict]:
    # Fetch up to RESULT_DEPTH results for one keyword in one market and
    # return CSV-ready rows, including the target domain's position if found.
    proxies = proxy_for_market(market)
    session = new_session(market["hl"])
    all_results = []
    for start in range(0, RESULT_DEPTH, 10):
        html = fetch_html(
            session=session,
            proxies=proxies,
            keyword=keyword,
            gl=market["gl"],
            hl=market["hl"],
            start=start,
        )
        page_results = extract_organic_results(html)
        if not page_results:
            break
        all_results.extend(page_results)
        time.sleep(REQUEST_DELAY_SECONDS)
    deduped = []
    seen = set()
    for item in all_results:
        if item["url"] in seen:
            continue
        seen.add(item["url"])
        deduped.append(item)
        if len(deduped) >= RESULT_DEPTH:
            break
    collected_at = datetime.now(timezone.utc).isoformat()
    target_position = None
    for index, item in enumerate(deduped, start=1):
        if domain_matches(item["url"], TARGET_DOMAIN):
            target_position = index
            break
    rows = []
    for index, item in enumerate(deduped, start=1):
        rows.append(
            {
                "collected_at_utc": collected_at,
                "market": market["name"],
                "gl": market["gl"],
                "hl": market["hl"],
                "keyword": keyword,
                "rank": index,
                "url": item["url"],
                "title": item["title"],
                "target_domain": TARGET_DOMAIN,
                "target_position": target_position,
            }
        )
    country, region, city = verify_exit_country(proxies)
    print(f"[{market['name']}] exit country check: {country}, {region}, {city}")
    return rows
def append_rows(file_path: str, rows: list[dict]) -> None:
    # Append snapshot rows to the CSV, writing the header only on first use.
    if not rows:
        return
    fieldnames = list(rows[0].keys())
    file_exists = os.path.exists(file_path)
    with open(file_path, "a", newline="", encoding="utf-8") as handle:
        writer = csv.DictWriter(handle, fieldnames=fieldnames)
        if not file_exists:
            writer.writeheader()
        writer.writerows(rows)
def main():
    for market in MARKETS:
        market_rows = []
        for keyword in KEYWORDS:
            market_rows.extend(collect_snapshot(keyword, market))
        append_rows(OUTPUT_FILE, market_rows)
    print(f"Done. Appended snapshots to {OUTPUT_FILE}")


if __name__ == "__main__":
    main()

What should a correct first run produce?
A correct run should give you a file you can audit without any extra tooling.
Your CSV should contain fields like:
collected_at_utc
market
gl
hl
keyword
rank
url
title
target_domain
target_position
Example row format:
collected_at_utc,market,gl,hl,keyword,rank,url,title,target_domain,target_position
2026-04-22T14:00:00+00:00,US,us,en,best running shoes,1,https://www.example.com/...,"Best Running Shoes for 2026",example.com,3
The first run is not complete until you verify three things:
The exit-country check matches the market you intended to query.
The CSV contains ranking rows, not an empty file.
The target_position value is plausible when you compare it with the stored URLs.
If you want a simple manual sanity check, use one keyword and two different countries first. If the domain mix and ordering never change at all, your country targeting probably is not working.
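Here is a minimal sketch of that comparison, assuming a single run has already been appended to country_rank_snapshots.csv and using the market names from the script above. The top_urls() helper is illustrative:

import csv

def top_urls(path: str, market: str, keyword: str, depth: int = 10) -> list[str]:
    # Collect the stored URLs for one market/keyword pair, in rank order.
    rows = []
    with open(path, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle):
            if row["market"] == market and row["keyword"] == keyword:
                rows.append((int(row["rank"]), row["url"]))
    return [url for _, url in sorted(rows)[:depth]]

us = top_urls("country_rank_snapshots.csv", "US", "your keyword 1")
de = top_urls("country_rank_snapshots.csv", "Germany", "your keyword 1")
overlap = len(set(us) & set(de))
print(f"Shared URLs in the top {len(us)}: {overlap}")
# If the two lists are identical on every run, the country targeting is probably not applied.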
How do you turn one scrape into a monitoring loop?
A monitor starts on day two, not day one.
For a small workflow, keep it simple:
track the same keyword set every day
append every run to the same historical store
compare target_position over time (a short reading sketch follows this list)
inspect country-by-country changes instead of averaging everything together
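Here is a minimal reading sketch for that comparison, assuming the CSV schema produced by the script above. The position_history() helper is illustrative and simply prints one target_position series per market and keyword:

import csv
from collections import defaultdict

def position_history(path: str) -> dict:
    # Group target_position by (market, keyword), one value per collection timestamp.
    history = defaultdict(dict)
    with open(path, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle):
            key = (row["market"], row["keyword"])
            history[key][row["collected_at_utc"]] = row["target_position"]
    return history

for (market, keyword), points in sorted(position_history("country_rank_snapshots.csv").items()):
    series = ", ".join(f"{ts[:10]}: {pos or 'not found'}" for ts, pos in sorted(points.items()))
    print(f"[{market}] {keyword}: {series}")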
If you only need a lightweight history, CSV is enough. If the file becomes annoying, move the same schema into SQLite.
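If you do move to SQLite, a one-to-one migration is enough to start. This sketch assumes the CSV produced by the script above and stores every column as TEXT so the schema stays identical:

import csv
import sqlite3

columns = ["collected_at_utc", "market", "gl", "hl", "keyword", "rank",
           "url", "title", "target_domain", "target_position"]

connection = sqlite3.connect("country_rank_snapshots.db")
connection.execute(
    "CREATE TABLE IF NOT EXISTS snapshots (" + ", ".join(f"{name} TEXT" for name in columns) + ")"
)
with open("country_rank_snapshots.csv", newline="", encoding="utf-8") as handle:
    rows = [tuple(row[name] for name in columns) for row in csv.DictReader(handle)]
placeholders = ", ".join("?" for _ in columns)
connection.executemany(f"INSERT INTO snapshots VALUES ({placeholders})", rows)
connection.commit()
connection.close()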
You do not need a fancy scheduler. One daily run is a good default.
On Linux or macOS, use cron:
0 6 * * * /usr/bin/python3 /path/to/rank_monitor.py >> /path/to/rank_monitor.log 2>&1
That runs the script every day at 06:00.
On Windows, use Task Scheduler. Microsoft documents that Task Scheduler can run tasks at a specific time on a daily or weekly schedule. Create a basic task that:
runs daily
points to python.exe
passes your script path as the argument
Example:
Program: C:\Python311\python.exe
Argument: E:\rank-monitor\rank_monitor.py
That is enough for a small country-level monitor. Do not overbuild the scheduling layer before the data itself is clean.
How do you verify the rankings are really from the target country?
Do not trust a country report unless you verify the country.
Use this checklist for your first verified run:
confirm the exit IP country with a public IP service
make sure gl matches the same country
make sure hl matches the language context you want
keep the session signed out
compare one query across at least two countries and confirm the result mix changes
One subtle point is worth keeping in mind. Google says results can differ for reasons beyond direct personalization, including location, language, device, and timing. So your goal is not perfect sameness across repeated runs. Your goal is consistent country logic under fixed inputs.
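A small cross-check makes the first item on that checklist automatic. This sketch mirrors the verify_exit_country() call in the script and compares the reported country code against the market's gl value; the check_market_exit() name is illustrative:

import requests

def check_market_exit(market: dict, proxy_url: str) -> bool:
    # Ask a public IP service where the proxied request actually exits,
    # then compare the reported country code against the market's gl value.
    proxies = {"http": proxy_url, "https": proxy_url}
    response = requests.get("https://ipinfo.io/json", proxies=proxies, timeout=20)
    response.raise_for_status()
    reported = response.json().get("country", "")
    expected = market["gl"].upper()  # ipinfo.io returns ISO codes such as "US" or "GB"
    if reported != expected:
        print(f"[{market['name']}] exit mismatch: expected {expected}, got {reported or 'unknown'}")
        return False
    return True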
Troubleshooting: what usually breaks first?
1. You get a 403, a challenge page, or repeated CAPTCHAs
The usual mistake is assuming this is always an IP problem. Sometimes it is. Sometimes it is not.
A residential exit changes your network origin. It does not automatically make a raw HTTP client behave like a real browser. That means repeated blocks can still happen even when the proxy pool itself is fine.
For this kind of country monitor, the safe response is:
shrink the keyword set
reduce frequency
add more spacing between requests
stop trying to scale the script into a high-volume scraper
If repeated blocks start after only a few requests, treat that as the point where DIY stops making economic sense. Move to a managed SERP source instead of turning this into an anti-detection project.
2. The results look like the wrong country
This is usually a configuration mismatch.
Check:
the proxy endpoint really maps to the country you intended
gl matches that country
hl is consistent with your language target
you are not reusing a dirty session from another market
If you use a provider dashboard to generate endpoints, regenerate them there. Do not fix this by guessing new username patterns in your code.
3. The parser suddenly returns empty or noisy data
Google changes HTML. That is normal.
Keep one raw HTML sample from a working run so you can compare later if the selector breaks. The extraction function in the script is intentionally narrow: it looks for result blocks that include an h3 and a real outbound link. That works for a lightweight monitor, but it is still HTML parsing, and HTML parsing always needs maintenance.
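Here is a minimal sketch of that habit, assuming you call it with the HTML returned by fetch_html(). The save_html_sample() helper and the html_samples folder name are illustrative:

import os
from datetime import datetime, timezone

def save_html_sample(html: str, market_name: str, keyword: str, folder: str = "html_samples") -> str:
    # Keep one timestamped copy of the raw page so selector breakage can be diagnosed later.
    os.makedirs(folder, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    safe_keyword = "".join(c if c.isalnum() else "_" for c in keyword)
    path = os.path.join(folder, f"{market_name}_{safe_keyword}_{stamp}.html")
    with open(path, "w", encoding="utf-8") as handle:
        handle.write(html)
    return path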
4. The script runs, but the stored rank is obviously wrong
That usually means your parser is picking up non-organic links, or your target-domain matcher is too broad.
Check two things:
whether the first 10 stored URLs are actually organic result URLs
whether domain_matches() is matching only the domain you mean to track
If your site uses several subdomains, decide that rule up front. Otherwise you will spend more time arguing with the data than using it.
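If you decide that only the exact host should count, a stricter matcher is a small change. This sketch is illustrative; domain_matches_exact() is not part of the script above:

from urllib.parse import urlparse

def domain_matches_exact(result_url: str, target_domain: str) -> bool:
    # Stricter variant: count only the exact host, treating www as equivalent.
    host = urlparse(result_url).netloc.lower()
    target = target_domain.lower()
    return host == target or host == "www." + target

# The script's default also counts subdomains: blog.example.com matches example.com
# with domain_matches(), but not with domain_matches_exact().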
When should you stop DIY and switch to a managed source?
Stop when the monitoring workflow becomes harder to maintain than the insight is worth.
That usually happens when you need one or more of these:
lots of countries
lots of keywords
stricter uptime
city-level accuracy
more than organic links
stable collection under higher request volumes
A small Python workflow is still worth it when you want direct visibility and very low cost. It is not worth it when you start spending more time fixing selectors, debugging country mismatches, and dealing with block noise than you save on tooling.
That is the line to watch. The first request is easy. The first month of reliable daily snapshots is what tells you whether the workflow is actually sustainable.
Compliance note
This workflow should stay inside a narrow, documented use case.
Use it only when all of the following are true:
you are checking public search result pages
you are monitoring markets, keywords, and domains that your team is authorized to analyze
your organization is comfortable with the measurement method under its legal, compliance, and crawler-use policies
the request volume stays low enough to remain an internal measurement workflow rather than a scaled extraction project
Before you schedule the script, review the target platform’s applicable terms and your own internal policy for search monitoring or web data collection. If you begin hitting repeated challenge pages, 403s, or obvious block signals, stop expanding the workflow and reassess the collection method instead of increasing aggressiveness.
Do not use this setup for:
account creation or account farming
ticketing or checkout automation
personal-data harvesting
bypassing geographic content rights
bulk collection far beyond the monitoring scope described here
If you want to test this workflow with a proxy stack that already matches the technical requirements, Proxy001 is a practical starting point. Its official residential proxy page lists rotating residential IPs across 200+ countries and regions, country-to-city targeting, HTTP(S)/SOCKS5 support, unlimited concurrency, and request-level or custom-interval rotation. Its pricing page currently lists residential plans from $2/GB on the smallest tier down to $0.55/GB on the largest tier, and the site also states that new users can request a 500MB trial. For a small internal rank monitor, that is enough to test a few markets without committing to a full SEO platform on day one.