Detecting Content Blockers is a losing battle, but you can be smart and ethical when doing so...
There's been a long cat-and-mouse game between ad blockers/content blockers and advertisers/analytics/trackers. The short answer is that you aren't going to defeat them single-handedly. Many of the libraries designed to detect them will fail, as they're inevitably blocked once a content blocker is updated to detect them.

As someone who once ran a website that hit 150,000 unique visitors a month, funded by advertising, I'm sympathetic to the publisher's plight. As a content writer, I value analytics; I use Google Analytics on this site, as it helps me understand what content resonates, what channels people use to find my content, and how they consume it. As a developer with a touch of UX, logging and error tracking are extremely helpful. A service like Loggly can help me find errors, design better to catch edge cases that aren't on the "happy path", and make data-driven decisions about a product.

However, the advertising industry has perniciously proven it is not to be trusted. There's a reason why, as a user, I surf with Ghostery/1blocker, block cross-origin cookies (on my desktop, I kill all cookies), use a VPN, and disabled Flash long before most people did, to dodge the dreaded forever Flash cookie. Privacy matters.
This is my attempt to create an ethical framework around content blocking from the perspective of a developer/content creator/publisher.
A quick list of observations
I've assembled a list of facts/observations about content blockers.
- Adblock/Adblock Plus focus on advertising but not analytics. This could change in the future.
- 1blocker and Ghostery are particularly good content blockers. Both will block `<script>` tags from loading, or any `onerror` code at the `src` level.
- Content blockers are not fooled by appending `<script>` tags to the DOM via JavaScript.
- 1blocker and Ghostery do not remove blocked `<script>` tags from the DOM, so any check for the tag's existence will return true.
- 1blocker and Ghostery can detect popular anti-blocker scripts and block them.
- Browsers are pushing privacy settings more aggressively, with Firefox leading the charge and Safari not far behind.
- If your website fails to work with one of the popular content blockers enabled, you are cutting out roughly 20% of your audience.
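To illustrate the observation about DOM checks: since blockers leave the `<script>` element in place, testing for the tag proves nothing; test instead for a side effect the script would have produced. A minimal sketch (the global name `ga` is just an illustrative example, and `scope` is a parameter only so the sketch runs outside a browser — on a page you would pass `window`):

```javascript
// Unreliable: the <script> tag stays in the DOM even when blocked,
// so document.querySelector('script[src*="analytics"]') still matches.
// Reliable: check for a side effect of execution, e.g. a global variable
// the script is known to define.
function scriptDidRun(globalName, scope) {
  return typeof scope[globalName] !== 'undefined';
}

// In a browser: scriptDidRun('ga', window)
```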
But I'm a special snowflake!
Using powers for good
So as a developer/UX designer you're suddenly faced with a problem. Your website or web app has features that break when content blockers are enabled. You've already made sure that your core functionality isn't tied to anything that will be blocked by content blockers.
Likely your client or manager will ask "can't you just go around the content blocker?".
The short answer is "No". You will not forcibly defeat content blockers, and if you try, you're signing up for an unwinnable, all-consuming cat-and-mouse game. However, you can potentially detect content blockers rather than defeat them. With a service like Loggly, you can easily check whether the `_Ltracker` variable has loaded.
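As a sketch (assuming, per the paragraph above, that the Loggly snippet defines a `_Ltracker` global), the check can be as simple as testing for the variable. The `scope` parameter exists only so the sketch runs outside a browser; on a page you would pass `window`:

```javascript
// Detect whether the Loggly tracker script actually executed.
// If a content blocker stopped it, the global it defines never appears.
function logglyBlocked(scope) {
  return typeof scope._Ltracker === 'undefined';
}

// In a browser: if (logglyBlocked(window)) { /* degrade gracefully */ }
```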
Suddenly we're at the ethical precipice as we can do a number of things with this information. I've assembled a list of the ethical paths.
Ethics of content blocking code
Most Ethical:
The website/web app's core features work without any warnings until the user reaches an ancillary feature that may be broken. The user is able to complete core functions (consume content, use navigation, submit forms).
Example: Videos still work. User is able to place orders but 3rd party chat tech support may be broken. User is informed.
Fairly Ethical:
User receives warnings on every page encouraging them to whitelist the site, regardless of whether functionality is affected.
Example: User is pestered with a whitelist-the-site message. User is still able to perform operations. Videos still work. User is able to place orders. 3rd party live chat tech support may be broken. User is informed.
Least Ethical:
User is blocked from consuming content until the site is whitelisted, regardless of whether functionality is affected.
No Ethical Stance: Site does not attempt to detect any blocked content. Site either functions or does not. This is the majority of websites.
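A minimal sketch of the "most ethical" path: core functionality is untouched, and a polite, pre-rendered notice is revealed only on the page where the broken ancillary feature lives. All names here (`LiveChat`, `chat-blocked-notice`) are hypothetical placeholders, and the `scope`/`doc` parameters stand in for `window` and `document` so the sketch runs outside a browser:

```javascript
// Reveal a notice only when the third-party chat widget failed to load.
// Core features (orders, navigation, content) are never gated on this check.
function warnIfChatBlocked(scope, doc) {
  if (typeof scope.LiveChat !== 'undefined') {
    return false; // chat script executed; nothing to warn about
  }
  var notice = doc.getElementById('chat-blocked-notice');
  if (notice) {
    notice.hidden = false; // reveal a pre-rendered, non-blocking message
  }
  return true;
}

// In a browser: warnIfChatBlocked(window, document)
```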
This model isn't free of problems; it's almost entirely from the lens of a non-advertisement-supported website, like a campaign site, company site, ecommerce site, or SaaS. While these sites may contain advertising and tracking, all of the aforementioned have revenue generated by sales (SaaS/ecommerce) or lead generation (campaign/company). Websites that are dependent on ad revenue adhere to a different set of ethics and variables.
Other methods of checking whether a script has loaded
Checking for variable existence is the most fail-safe method of seeing whether a script has loaded. While `onerror` will not work on an individual `<script>` tag, you can write scripts into the head with the following code. This comes at a mild expense of code execution and may not work in all scenarios.
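The original post ends before the code; here is one plausible sketch of the technique described: inject the `<script>` element from JavaScript so that `load`/`error` handlers can be attached. Blockers behave inconsistently here (some cancel the request and fire `error`, others fire neither handler), which is why the variable-existence check above remains the fallback. The optional `doc` parameter exists only so the sketch runs outside a browser:

```javascript
// Inject a script element so we can observe whether it loads.
// onerror fires on network failure and with some content blockers;
// others may fire neither callback, so also fall back to checking
// for a variable the script defines (see above).
function loadScript(src, onLoaded, onBlocked, doc) {
  doc = doc || document; // parameterized only for testability
  var s = doc.createElement('script');
  s.src = src;
  s.async = true;
  s.onload = function () { onLoaded(); };
  s.onerror = function () { onBlocked(); };
  doc.head.appendChild(s);
}

// In a browser:
// loadScript('https://example.com/tracker.js',
//   function () { /* loaded */ },
//   function () { /* blocked or failed */ });
```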