The Evilization of Google—And What to Do About It
Understanding and Undoing Google's Dominance over the Internet
“Don’t Be Evil”
My first encounter with Google occurred in 2000. I was active on Usenet newsgroups, and Google had just bought Deja News, the leading search engine for these newsgroups. I now had to deal with Google rather than Deja News. Google quickly changed how search results for newsgroups were delivered, omitting some information I had previously found helpful. I remember thinking that Google’s changes were not for the better.
Nonetheless, I did quickly find that Google’s search of the internet as a whole was much better than AltaVista, Excite, and the other search engines available at the time. Basing its approach on the model of citation indexes, in which an article’s importance is gauged by how many other articles cite it and by which ones, Google’s PageRank algorithm quickly set the standard for internet search.
Larry Page and Sergey Brin, two nerds who put their Stanford doctorates on hold to found Google, were at the start endearing. They helped bring order to the internet, and they seemed keen to make its information freely available, unfettered by ideology or politics. Their freewheeling use of information got them in hot water over copyright (as when they attempted to scan and put online all the world’s books). But in its early days, Google seemed, on balance, a positive force. Even its quirky motto, “Don’t be evil,” suggested a harmless insouciance. And their mission, already articulated in the late 1990s and supposedly still in force to this day, was lofty: “to organize the world’s information and make it universally accessible and useful.”
In 2015, Google’s newly formed parent company Alphabet retired the old motto, replacing it with “Do the right thing.” The old motto was better. Negation has advantages that positive assertions lack. Our First Amendment, for instance, doesn’t extol doing right by allowing free speech. Rather, it simply forbids the federal government from making any law abridging free speech. Once the federal government gets into the business of allowing free speech, it can define what counts as allowable free speech. And you need only look at our northern neighbor or our friends across the Atlantic to see how that’s working out.
In the same vein as our First Amendment, the Roman rhetorician Quintilian remarked, “Write not so that you can be understood, but so that you cannot be misunderstood.” There’s a power and clarity in no that’s absent from yes. It’s no accident that Judaism’s Ten Commandments and the Buddha’s five basic moral precepts are formulated as negations. The ability to say no is the mark of freedom. “Resist the devil [i.e., say no to the devil] and he will flee from you” (James 4:7). The mark of tyranny, by contrast, is to tell you what to think and do, and not take no for an answer. Or, as memorably put by Don Corleone in The Godfather, “I’m gonna make him an offer he can’t refuse.”
Affirmations on the whole, however, are more pleasant than negations. Hence the shift of mottos with Google. Who doesn’t want to say that they’re doing the right thing rather than merely avoiding evil? The power of human rationalization is such that it’s easy to convince ourselves that we are doing the right thing even when we are merely being self-serving and acting on perverse incentives.
Lord Acton’s admonition that power corrupts and absolute power corrupts absolutely has, in the years since Google adopted its new motto, been borne out. Doing the right thing at Google has come to mean doing right by itself. And because Google views itself as such a positive force in the world, doing right by itself is now interpreted by Google to mean doing right by the world as well. But interests are never so perfectly aligned. The new motto has become a self-serving rationalization.
More than that, Google has become evil. Yes, it has no compunction about crushing small businesses for which its services provide crucial infrastructure. Yes, it exercises monopolistic control in violation of, if not antitrust laws directly, then their spirit. But Google’s main evil is that it commoditizes people, both individually and corporately, treating them as “users” to be exploited and manipulated. It values users (people) only for the profit they bring to Google, profit it understands not only as the money it can make off of users but also as the ideological and political sway it can exercise over them to advance its public policy agendas.
At every turn, Google exploits users for profit, even if this means misleading them and, as the designation implies, “using” them. Ultimately, Google is only too ready to spit out users, demonetizing, delegitimizing, and deplatforming them when better profits can be made elsewhere, and especially when recalcitrant users need to be punished for not toeing the Google line. In Kantian terms, Google treats people as means and not as ends, fundamentally disrespecting our humanity.
To gain some perspective on just how commanding Google is, consider how SEMrush gauges its footprint on the internet. SEMrush is a service I’ve used in my online businesses to gauge how well webpages and websites are doing in terms of traffic and keyword searches. SEMrush lists the following as the top twenty sites in the world by monthly web traffic, as of July 2024. Run your eyes down this list and you’ll see that Google and YouTube together, both owned by Alphabet, draw more than twice the traffic of the next eighteen sites combined: over 200 billion visits per month. Here are the sites in descending order of total monthly visits:
Google: 131.2 billion visits
YouTube: 71.4 billion visits
Facebook: 12.97 billion visits
Wikipedia: 6.93 billion visits
Instagram: 6.5 billion visits
Reddit: 5.8 billion visits
Pornhub: 5.39 billion visits
Bing: 4.77 billion visits
X (formerly Twitter): 4.35 billion visits
WhatsApp: 3.83 billion visits
XVideos: 3.62 billion visits
Yahoo: 3.46 billion visits
Twitter.com: 3.31 billion visits (alternative domain traffic)
Taboola: 3.29 billion visits
Amazon: 3.23 billion visits
ChatGPT.com: 3.10 billion visits
Yandex: 3.05 billion visits
DuckDuckGo: 3.04 billion visits
Taboola News: 3.03 billion visits
TikTok: 2.61 billion visits
It’s in the nature of power to make evil more evil. So if Google has turned to the dark side, it has vast resources to be very evil indeed. Still, I want to be careful about throwing around the term evil too cavalierly in reference to Google. There’s much that Google does that’s positive, at least on the surface. Who doesn’t use and enjoy Google Maps, the Chrome browser, Gmail, and a host of other (apparently) free services that Google makes available to internet users? True, these services exist ultimately to inveigle users into Google’s profit engine. But most of us are willing to suspend cynicism while enjoying Google’s many offerings. What’s more, nothing is totally evil. Still, there’s enough evil in Google that it is, for now, more on the side of Darth Vader than Obi-Wan Kenobi.
The Dependence of SEO Businesses on Google
Since 2010, I’ve worked on educational websites, software, and technologies. This has included doing SEO (Search Engine Optimization) for content-based online businesses. By an SEO content-based online business, I mean one that builds web content in some niche by writing and posting relevant articles and attempting to get them to rank highly with the search engines for certain keywords (which can be individual words or multi-word terms). Talk of “search engines,” plural, is in this context a polite fiction: throughout my time doing SEO, Google controlled over 90 percent of search, a share it holds essentially to this day (to be precise, its market share finally dipped below 90 percent, to 89.98 percent, only at the end of 2024). An SEO business therefore needs to please Google or else it is dead in the water.
The business model for an SEO business looks something like the following: Such a business attempts to draw people to its webpages and websites by getting its content (mainly articles but also infographics, videos, etc.) to rank highly with the search engines, which is to say Google. Thus, when people use Google to search for keywords relevant to the business, they will naturally tend to find the business’s content because it ranks highly with Google. The business then monetizes (makes money off) the traffic from those searches once users click on links from Google’s SERPs (Search Engine Results Pages) and thus find their way to the business’s website.
The traffic that comes to websites through SEO is said to be “organic,” as distinguished from ad-driven traffic. For ad-driven traffic, businesses must pay a third party (usually Google, which controls an immense amount of ad space and brings in an immense amount of ad revenue). As we’ll see, Google has a perverse incentive to get SEO businesses that depend on organic search to shift to an advertising model in which Google gets paid ad revenue.
SEO businesses do need to pay for SEO, but that money doesn’t go to Google. Rather, it goes to crafting content of interest to users and making sure that content gets linked to by parties interested in it. Because Google’s PageRank algorithm is modeled on academic citation indexes, where the importance of research is gauged by citations to the work (in both quantity and quality), content typically needs a lot of links to rise to the top of the Google SERPs (as happens when content goes “viral”). SEO businesses pay to procure such links, often by having people on staff who contact other websites, try to interest them in the content, and incentivize them to link to it.
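The link-based logic just described can be illustrated with a toy version of PageRank. This is only a sketch of the citation-index idea, not Google’s actual ranking system (which layers many other signals on top). The four-page web here is hypothetical, and the code assumes every page has at least one outbound link (real PageRank also handles dangling pages):

```python
# Toy PageRank by power iteration over a hypothetical 4-page web.
# A page's rank is fed by the ranks of the pages that link to it,
# just as a paper's importance is gauged by who cites it.

links = {            # page -> pages it links out to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],      # D links out, but nothing links to D
}

def pagerank(links, d=0.85, iterations=50):
    """d is the damping factor: with probability 1-d a surfer
    jumps to a random page instead of following a link."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        rank = {
            p: (1 - d) / n
               + d * sum(rank[q] / len(links[q])
                         for q in pages if p in links[q])
            for p in pages
        }
    return rank

ranks = pagerank(links)
```

Page C, which collects links from three other pages, ends up ranked highest; page D, which nobody links to, ends up lowest. That is precisely the dynamic that makes procuring links so valuable to SEO businesses.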
To see the difference between organic and paid traffic, simply “google” a given keyword. Suppose, for instance, that right now you tried googling “best online colleges.” The first items Google presents are sponsored, which means they are ads. I’m seeing four such ads (two from the online schools Liberty and SNHU, and two from websites that promise to help prospective students find the right school). You may see different sponsored items than I do (Google adapts its search results to specific user profiles, which, as we’ll see, is a prime temptation for it to do evil).
The sponsored items are paid for by companies as advertising revenue to Google, and they can be enormously expensive. I don’t have the precise current figures for the keyword “best online colleges,” but several years back someone in the educational space told me that these ads cost around $70 per click. So if you click on these ads, you are costing the advertisers a lot of money and putting a lot of money in Google’s pocket. Google makes over $200 billion a year from these ads, which accounts for roughly three-quarters of the total revenues of Alphabet, Google’s parent company.
The actual cost of a Google ad for a given keyword is decided through an auction where companies bid on how much they are willing to pay for an ad. To see how lucrative some of these sponsored items (ads) for particular keywords can be to Google, here is a list of some of the higher-valued keywords. I list the keyword first and then the average cost per click:
mesothelioma cancer lawyer ... $226.50
virginia mesothelioma lawyers ... $224.00
hawaii mesothelioma lawyers ... $207.90
mesothelioma lawyer new jersey ... $178.70
oklahoma mesothelioma attorneys ... $164.90
maryland mesothelioma attorneys ... $163.20
florida mesothelioma lawyers ... $162.00
mesothelioma lawyer massachusetts ... $158.50
kansas mesothelioma lawyer ... $155.30
mesothelioma lawyer west virginia ... $147.80
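The way those per-click prices emerge from bidding can be sketched as a generalized second-price auction, in which each winner pays roughly what the next bidder down was willing to pay. This is a simplified model, not Google Ads’ actual mechanics (which also weight bids by quality scores and apply reserve prices), and the firm names and bids below are hypothetical:

```python
# Simplified generalized second-price (GSP) auction sketch.
# Hypothetical law firms bid a maximum cost-per-click; each winner
# pays the bid of the advertiser ranked one position below it.

def gsp_auction(bids, slots):
    """bids: {advertiser: max cost-per-click}; slots: number of ad positions.
    Returns (advertiser, price_paid) for each winning slot, best slot first."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    results = []
    for i, (advertiser, _bid) in enumerate(ranked[:slots]):
        # Price is the next-highest bid (ignoring the minimal increment
        # real auctions add); the last winner pays the highest losing bid,
        # or 0 if there is no losing bidder.
        price = ranked[i + 1][1] if i + 1 < len(ranked) else 0.0
        results.append((advertiser, price))
    return results

winners = gsp_auction({"FirmA": 226.50, "FirmB": 224.00, "FirmC": 207.90}, slots=2)
```

Here FirmA wins the top slot but pays FirmB’s bid of $224.00, and FirmB wins the second slot paying FirmC’s bid of $207.90. Observed costs per click thus track what the competition is willing to pay, which is why contested keywords like these command such high prices.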
All of these keywords combine mesothelioma (a cancer caused by asbestos) with a reference to an attorney or lawyer. Someone doing such a search is presumably looking for a lawyer to represent a prospective client who has contracted mesothelioma through asbestos exposure. Law firms thus pay hefty sums for these ads to attract clients seeking damages for being negligently exposed to asbestos. The damages can be quite large, and attorneys representing plaintiffs in such cases typically work on contingency, with a going rate of 33 to 40 percent of the settlement or award. Hence the high cost per click.
Returning to our “best online colleges” search, with its smaller but still sizable cost per click, we find that after the sponsored items comes still more promotional material from Google. Thus, I’m seeing (maybe it’s different for you) an AI-generated summary of what Google puts forward as the ten best online colleges. Sometimes Google also includes a knowledge panel that summarizes a topic or gives a biosketch of a person queried in a search.
Google has all this information from searching the web and re-presenting what it finds there. Just as financial institutions are not entrepreneurs and thus don’t create value but merely invest in existing value, so Google is not a content creator but merely highlights existing content. And yet, Google’s re-presentation of information created by others makes it less likely that users will actually visit the articles and sites where the creators originally presented the information. Accordingly, Google’s business expands at the expense of the sites whose content it re-presents.
Note that I’m using “re-present” differently from “represent.” Google re-presents, as in presents again, material that it ingests and regurgitates. This is material that, properly speaking, belongs (as intellectual property) to others but that Google helps itself to as though it were its own. This is a potential chink in its armor that could lead to significant legal exposure if the laws surrounding its exorbitant privileges as a platform were changed.
Right now, Section 230 of the Communications Decency Act (47 U.S.C. § 230) grants online platforms like Google and Facebook legal immunity for user-generated content, ensuring they are not considered publishers or speakers and therefore cannot be held liable for what users post, even if the content is false, defamatory, or otherwise problematic. Additionally, this law allows platforms to moderate, filter, or remove content they find objectionable without losing their immunity. Moreover, for copyright issues, the Digital Millennium Copyright Act (DMCA) (17 U.S.C. § 512) provides a separate legal framework, offering platforms safe harbor protection against copyright infringement claims as long as they comply with takedown requests from copyright holders. Together, Section 230 and the DMCA enable platforms to avoid direct legal exposure for third-party content, allowing them to host, curate, and distribute vast amounts of information without assuming liability.
Consider, for instance, what happens when you do a Google image search on some topic. You’ll see Google re-presenting image after image. Sure, you can click on an image that interests you and from there follow a link to the site where the image originally appeared. But often when I’m searching for images, it’s enough to see what Google re-presents, and I suspect that’s the case for many Google users. Yet for some of those images, if they appeared on individual websites, the owners could not only request that the images be taken down but also claim copyright violation and seek damages.
On my websites, I must, as a matter of survival, be scrupulous about making sure that we have the rights to any images we use. Otherwise, legal exposure from copyright violations could destroy my business. Sometimes, despite our best efforts at scrupulosity, an image slips through the cracks (perhaps a new employee hasn’t been properly trained to upload only images for which we have proper usage rights). It doesn’t matter if the image is up on our site for only a day. There’s a law firm in Switzerland that trawls the internet in real time for copyright violations.
I’ve had this firm come after me for images to which we didn’t have the rights. It cost me about $700 an image to get them off my back. What if Google were hit with copyright damages for every Getty image it re-presented? Why does Google get away with such copyright violations when little guys like me don’t? Google has effective lobbyists and has, to date, been able to make the laws work in its favor, as with Section 230 and the DMCA. But that can change.
Okay, after all the ways that Google inserts itself into your searches—after all the stuff Google shows in response to your search query, stuff that in one form or another is advertising and contributes to its bottom line—you finally get to the organic results. The organic results are the search results that Google delivers on the merits of the webpage in question and the authority of the website hosting the page—at least that’s Google’s official story. In fact, how Google assesses the merit of webpages is itself biased and often in service of, if not its bottom line, then broader political and ideological aims (more on this later).
Even with organic results, once you finally wade through the Google ads to reach them (sometimes you have to scroll “below the fold”), you can still find Google inserting itself with further ads and commentary. For the search on “best online colleges,” Google lists four sponsored items before the organic results, intersperses another ad among them, and, after all the organic results on the first search engine results page, includes three more sponsored items. These are all ads; clicking on every one delivered in response to the “best online colleges” query would collectively yield Google several hundred dollars.
SEO content sites make more money when people (objectified as “users”) search on keywords that signal high intent: someone searching on a keyword combining “mesothelioma” and “lawyer” probably wants to sue somebody for giving them this cancer. Low intent is unrewarding; high intent is lucrative. For an example of low intent, consider users who search on the keyword “good careers.” They are most likely still trying to figure out what to do with their lives and thus unlikely to enroll in a course of study or take some action requiring an investment of capital and time. They’re just collecting information and unlikely to act on it. But users who search on the keyword “best online mba programs” are likely thinking of applying to and enrolling in such a program. Simply by searching on that keyword, they reveal high intent.
The money is in high intent. Users searching on “best online mba programs” will quickly find themselves taken to an article that purports to rank the best online MBAs. Ranking lists are especially good at eliciting high-intent traffic (idle curiosity aside, why ask which schools in a niche are best unless you’re thinking of attending one?). Landing on such an article, users are then likely to click on some call to action requesting information about a particular online MBA program. Such a request constitutes a lead, and these leads can be lucrative. I have a colleague who generates such leads for a big online university and receives $175 per lead. A lead doesn’t guarantee enrollment, but it raises the probability of enrollment. Some years back, schools wanted to see about 2 percent of leads issue in enrollments. At $175 per lead, that would mean the school is paying close to $9,000 per enrollment.
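A quick back-of-envelope check of those figures ($175 per lead, with about 2 percent of leads converting to enrollments):

```python
# Back-of-envelope lead economics using the figures from the text:
# $175 per lead, roughly 2 percent of leads becoming enrollments.

def cost_per_enrollment(cost_per_lead, conversion_rate):
    # On average a school needs 1 / conversion_rate leads per enrollment.
    return cost_per_lead / conversion_rate

print(cost_per_enrollment(175.0, 0.02))  # 8750.0
```

That works out to $8,750 per enrollment, the “close to $9,000” figure cited above.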
Some years back, I ended up selling one of these educational websites for what to me was a hefty sum. As it is, I knew the academic world well and was able with the help of a solid team to write articles that very quickly shot up high with the search engines on lucrative high-intent keywords. The holy grail in the online educational space was being in the first position on the first SERP (Search Engine Results Page) for the keyword “best online colleges,” the most lucrative keyword in the business at the time (hence the previous exercise searching on this term). For a year or so we had the very top position and for several years we were in contention for it.
The place you most want to be is the first position on the first SERP. People doing searches are most likely to click on that position, the second position with less frequency, and so on. A decade ago, if you didn’t make it onto the first SERP, the top of the second SERP still wasn’t too bad, though the drop-off in traffic was significant. Currently, with all the stuff that Google puts in the way of organic results for lucrative keywords, unless you are in the first few positions of the first SERP, you’re probably wasting your time trying to make money off organic search. By the way, Google currently lists USNews in the number one spot for the keyword “best online colleges.”
Becoming a Slave to Google
As mentioned, I got into SEO content-based educational websites around 2010. It was a wild west back then—and easy, at least by present standards, to make money if you knew your way around the educational world, which I did. There was much low-hanging fruit, as they say. Things have become much more competitive these days, both because of the sheer number of entrants into the field and because of the way Google gobbles up and regurgitates the information it absorbs from SEO content-based sites, giving people less and less reason to visit sites that produce original content.
With regard to my SEO content businesses, I have strong feelings about Google, both positive and negative. On the positive side, there’s no way I could have made the money I did ten years ago but for search engines like Google helping people find my websites. That said, Google’s control over my business and other businesses in my space was troubling even back then and became overwhelming over time. In the name of making the web better, Google would periodically update its algorithm. The algorithm is not open source, so you never know what an update is going to do to your website. After an update, you’re always in the position of having to second-guess what Google did. You become a reverse engineer, but without ever seeing under the hood.
Thus, with one of my cherished websites, Google updates in 2014 and 2017 reduced my business by 50 percent or more. Other websites I knew of in my space were in some updates simply wiped out. When you get hit by a Google update that drastically undercuts your business, you are left reeling. All your attention is then on trying to figure out what got you penalized, redress it, and thereby recover your business. Perhaps you can get it back. Perhaps you can’t. In 2014, I had a lull of four months where my business was reduced by 50 to 60 percent. And then one weekend in May the business rebounded, and by the end of the summer my business was 400 percent over where it had been at its prior best.
But the hit I took in 2017 was much harder. I was optimistic that I could get the website back to its former profitability. But the space in which my website operated was getting much more crowded and competitive. And the hassle of dealing with Google’s updates, never quite being sure whether I had run afoul of Google’s ever-changing standards, was more than I wanted to deal with. Like an athlete who gets injured, I had to assess the seriousness of the injury that Google had inflicted on the site. Was it brief? Was it season ending? Was it career ending? From what I saw with colleagues in my space, any time you got hit by Google, it could be any one of these.
So I sold the site. The payout included an earnout based on the site reaching and then exceeding its former glory. With my team part of the acquisition deal, the former profitability plus some extra was achieved, and I received my earnout. While I think I made the right decision in selling the site when I did, I didn’t like feeling forced to sell because of Google’s arbitrariness in deciding which websites to reward and which to punish. Google obviously wouldn’t put it that way, portraying itself instead as calling the web to ever higher standards of excellence. But among small business owners like me, the perception that Google’s updates were in large measure self-serving and arbitrary was widespread. And without transparency from Google about how it was updating its algorithm, this perception was hard to refute.
Ranking with search engines is a zero-sum game: if your site drops in position on a SERP, another site takes over your spot. As I watched webpages get reshuffled on Google’s SERPs for given keywords, it was often hard to discover a compelling rationale for the reshuffling. Sometimes a rationale could be found. I recall one site with a page that suddenly came to rank highly for a major keyword in the educational space. As it happened, this page listed colleges that no other site had ever ranked highly in that keyword category. What gave the article traction with Google is that schools that had never ranked highly in that category were suddenly gratified to be ranked highly. Consequently, they linked to the page, signaling to Google that it should take note of the article and rank it highly for that keyword. And because .edu sites get high domain authority from Google, links from .edu sites are enormously helpful for bolstering a site’s SEO.
As a matter of public relations, Google says its updates reflect improvements in the quality of websites. And granted, many websites try to game Google using gray hat and black hat methods to raise their traffic, and these methods do require some response from Google. Gray hat methods exploit ambiguities or loopholes in Google’s guidelines without overtly violating them. Examples include keyword stuffing (larding articles with excessive repetitions of a keyword), purchasing low-quality backlinks, or cloaking content to appear more relevant to search queries. These tactics are risky because once Google catches on to them, it will update its algorithm to punish them, and the punishment can fall without warning.
Black hat methods, by contrast, flagrantly violate Google’s policies and rely on outright deception to game its algorithm. Examples include hiding spammy keywords in invisible text (one site I heard about hid entire links in the periods at the ends of sentences), using link farms, or deploying bots to create fake engagement. Black hat methods also include actively sabotaging competitor sites, as by giving them toxic backlinks (for example, having porn sites link to them). While black hat methods often achieve faster results than the more subtly unethical gray hat practices, they risk severe penalties from Google, such as being delisted from search results entirely.
But having now endured many of Google’s updates (we currently get several a year), I sense less a desire on its part to make the web better and more a need on its part to simplify how it adjudicates merit, with a strong tendency to value big sites and those with the resources to continually update their content. The rich get richer and new entrants face ever more stringent barriers to entry. In the education space, for instance, the site I sold some years back has since lost most of its business. Monster sites like USNews, Forbes, and Niche as well as discussion forums like Reddit and Quora now suck up most of the oxygen in the educational space (after Google has taken first dibs at the oxygen through its ads).
As it is, Google’s artificial intelligence is simply not good enough to determine which pages actually best answer a keyword query. Google’s proprietary search algorithm, which determines the ranking of web pages in search results, is covered by a number of patents, but in reality it is a trade secret, especially to the extent that it incorporates updates whose workings are entirely proprietary and opaque, like Panda, Penguin, and the more recent HCU (Helpful Content Update, for which I’ve yet to find a colleague who regards it as helpful).
Google’s publicly shared guidelines and policies, such as its Webmaster Guidelines, are supposed to describe best practices for webmasters to optimize their sites in line with Google’s stated vision for a rich and vibrant web. But dutiful adherence to the guidelines does not guarantee optimal rankings. What Google says it wants in websites and what it actually rewards and punishes are often two different things. This breeds a slavish mentality of perpetually second-guessing whether Google will favor some piece of content or way of expressing it. Nor does Google assess websites solely via its algorithm. Google outsources over 100,000 jobs, including quality raters who look over sites to determine how they are doing on criteria such as E-A-T (Expertise, Authoritativeness, and Trustworthiness).
Enduring presence, in the form of a website’s high authority over a long history, seems always to be a prime ranking factor for Google. The USNews rankings are not the best by any means, and yet Google rewards them with consistently high search results, in part because they’ve been around the longest and schools cannot afford to drop in them (a college or university that goes down in an annual USNews ranking regards this as a tragedy, and going up as a cause for celebration). Many academics in fact regard the USNews rankings as ridiculous, and for good reason, as the following video makes clear (don’t let its humor distract you from the truth):
It also seems that Google has quid pro quo understandings with certain sites, leading it to rank them highly for certain searches. Google and Wikipedia have a tacit agreement under which Google ranks the relevant Wikipedia entry highly whenever a search keyword matches one. Recently, Google agreed to pay Reddit $60 million annually to use its content to train its artificial intelligence. One benefit to Reddit has been that it now rises high in searches in the educational space. For instance, near the time of this writing, it came up third in organic search (third spot on the first SERP) for the keyword “best liberal arts colleges.” Here’s the Reddit article, which is just a brief opening paragraph and some comments, none of them all that insightful.
Here’s an article from AcademicInfluence.com that actually answers the query about which are the best liberal arts colleges in the US. Back in 2021, this article appeared high on Google’s first SERP, but now it resides in search-engine oblivion. Last I checked, in a Google search on the keyword “best liberal arts colleges,” the link to the AcademicInfluence.com article on best liberal arts colleges appears not until the eighteenth SERP, buried so deep that most people searching for that keyword will never find it.
Most of the high-ranking results for this keyword simply list the websites of colleges that offer a liberal arts education. Ask yourself whether listing such schools answers the query implicit in the keyword search “best liberal arts colleges.” Obviously, the query asks for a comparison of different liberal arts colleges. To list the colleges themselves, as Google does, is simply to have schools testify to their own excellence at offering a liberal arts education. And what use is such testimony? What’s needed is an outside party doing a fair-minded evaluation of the quality of different liberal arts colleges. In the past, Google mainly listed such independent websites for the query “best liberal arts colleges,” and the cited AcademicInfluence.com article was for a time in the number one or two spot in the Google rankings.
Google’s power over online businesses is not just monopolistic but also extravagant. When I try to make clear to people Google’s power over online businesses, I ask them what would happen to brick-and-mortar businesses if the roads on which they’re located could suddenly be completely altered with the push of a button. Consider Walmart. Walmart stores tend to be placed on busy streets and near highways. But what if by pushing a button you could turn the highway next to a Walmart into an obscure dirt road? Suddenly, no one can get to it any longer, and that Walmart store is out of business. That’s the power of Google: to create digital roads that enable online businesses to thrive and then with the push of a button to destroy those roads, causing the businesses along them to wither and die.
With physical roads, we can count on them staying in place and being maintained. A brick-and-mortar business by a physical road may suffer some loss depending on the economy as a whole or on some competitor setting up shop close by. But such a business is unlikely simply to collapse in the way that online businesses do because a Google update relegates them to oblivion. I’ve seen websites completely destroyed by Google updates. I’ve seen entire web portfolios lose 80 percent of their revenue stream overnight on account of a Google update. Google’s motto used to be “Don’t be evil.” But my own sense is that its extravagant power on the web has thoroughly corrupted it. Of course, Google can always rationalize that it built the digital roads in the first place and so it can do what it likes with them. But search is now becoming a public good, and Google’s extravagant power is bringing instability to business, so much of which is now online and depends on search.
In the years since I sold my main educational website, Google has made life tougher on such websites. It is increasingly using AI-generated summaries, stuffing its search engine results pages with sponsored content, and offering sidebar knowledge panels. Consequently, organic results from keyword searches are increasingly getting lost from view. Whereas in the past you could still hope to generate income from pages that appeared even below the top ten search results, now if you’re not in the top five of the first SERP, your income from such a page will be minimal to non-existent. It’s as though Google is doing everything in its power to throttle organic traffic that would otherwise go to content sites, driving that traffic instead to its sponsored ads and thus increasing its bottom line.
Whereas things used to feel more like a win-win with Google, now it feels more like an extortion racket. SEO sites depend for their lifeblood on search, and that means Google. Yet the irony is that Google could not be a search engine except for those sites (search engines after all need something to search). To be successful, these sites now increasingly need to pay Google directly or indirectly to get the high intent traffic from which to generate revenue.
Thus, they can pay Google directly by bidding on keywords and appearing in paid search results. But they can also pay third parties to create backlinks that Google may then credit to their sites. But there are no guarantees here. Google touts its PageRank algorithm as modeled on the citation index, where more citations to an article make it more influential. And so Google claims to value webpages and websites by the backlinks they receive from other sites, the backlinks serving the role of citations (in this regard, self-citations or internal links don’t count).
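The citation-index idea behind PageRank can be sketched in a few lines of code. What follows is the textbook power-iteration version of the algorithm, not Google’s production system, and the three-page “web” is a made-up example; it simply shows how rank flows along links, with self-links skipped (mirroring the rule that self-citations don’t count).

```python
# Textbook PageRank via power iteration. This is the classic
# citation-index idea only -- NOT Google's production system.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal rank everywhere
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            # A page splits its current rank among the pages it links to.
            # Self-links are skipped: "self-citations don't count."
            targets = [t for t in outlinks if t != page]
            if not targets:
                continue
            share = damping * rank[page] / len(targets)
            for t in targets:
                new_rank[t] += share
        rank = new_rank
    return rank

# Made-up three-page web: A and B both link to C, so C ends up
# with the most rank -- two "citations" beat one.
web = {"A": ["C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(web)
assert ranks["C"] > ranks["A"] > ranks["B"]
```

The point of the sketch is that rank is earned from incoming links, which is exactly why paying third parties to manufacture backlinks became an industry.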
But note, Google is not to be held to its stated standards. At times, it is only too happy to violate its PageRank algorithm. Take what Google did with the CBD oil industry. Many small businesses selling CBD oil and posting articles about CBD oil played the SEO game and were able to make good money by ranking highly in Google’s search results (one of my colleagues was a player in this arena). But then Google disrupted the CBD oil industry by implementing algorithm updates (contrary to PageRank) and ad policy changes that severely restricted the visibility and advertising of CBD-related products. In 2019, Google deindexed or significantly downranked many CBD-related websites, making it much harder for consumers to find CBD businesses through organic search. As a result, numerous small independent CBD businesses saw dramatic drops in traffic and revenue, allowing only a handful of larger, well-connected companies to remain in the market.
The aftermath of the CBD oil debacle ended up indirectly affecting one of my business efforts. A few weeks ago I described the Success Portraits Personality Test on this Substack column. The test covers 19 traits across 4 work situations and is intended to be a universal personality test. In some early advertising copy, we wanted to use the phrase “full spectrum of personality,” only to be dissuaded: “full spectrum” was so closely associated with Google’s destruction of the CBD oil industry (“full spectrum CBD oils”) that using the phrase in any context seemed likely to get us dinged by Google. Granted, this would be a very tenuous guilt by association, but it was simply not worth risking with Google. This slavish mindset has become typical of dealing with Google.
Note that Google’s disruption of the CBD oil industry is not unique. Up above I described Google’s disruption of the affiliate marketing industry. Another example is Google’s disruption of the alternative health industry. Around the same time that it was destroying the online CBD oil industry, it was also attempting to destroy the online alternative health industry. I was tempted to write “decimate” the online alternative health industry, but the old Roman practice of decimation referred to killing only one-tenth of an unruly band of Roman soldiers.
Google never simply decimates unruly websites in this sense—that’s too mild for its tastes. It knows that it has done its job right when it destroys at least ninety percent of a website’s business. This calls for a neologism. I propose nonaginate (from nonaginta, Latin for 90, indicating 90 percent; emphasis is on the first syllable). Nonaginate—hat tip to Google for inspiring the term—is thus defined as destroying at least ninety percent of a thing. Nonagination is therefore much more extreme than decimation (in decimation’s strict literal sense of only destroying ten percent). Google prefers to nonaginate sites it doesn’t like. Thus, for instance, when Google decided to go after the alternative health space, it reduced organic traffic to the website of one of its key players, Dr. Joseph Mercola, by 99 percent!
How, you might ask, does Google justify downranking alternative health sites? Rather than simply admit its own bias, it institutes what it claims to be an unbiased policy. It’s the old trick of using a policy to do your dirty work, thereby leaving your hands ostensibly clean. Google uses policy as a fig leaf to cover its bias. Thus it will impose E-A-T (Expertise, Authoritativeness, and Trustworthiness) and YMYL (Your Money or Your Life) content standards. According to Google’s Search Quality Evaluator Guidelines, content that can affect a person’s health, financial stability, or safety—like medical advice—falls under YMYL and thus must come from sources that demonstrate strong E-A-T.
Because many alternative health websites, even when backed by formal medical credentials (such as Joseph Mercola’s site—Mercola is an MD), fly in the face of conventional medical wisdom, Google’s algorithms and human raters are thus, in the name of E-A-T and YMYL, instructed to treat them as less trustworthy or potentially harmful. Invoking policy in this way allows Google to position its suppression of such content as a matter of user protection rather than viewpoint discrimination. Google’s bias here is as real as ever, only Google has deflected it to an impersonal policy that gives an air of objectivity.
In any case, for online businesses that Google has yet to nonaginate and that want to continue to play the SEO game with Google, there are SEO/PR firms designed to get websites and webpages to rank highly with Google. (As an aside, these firms are themselves now hurting because of Google’s draconian updates, which make it harder and harder for them to help clients get and stay profitable.) A prime approach that SEO/PR firms take to help online businesses with Google is to get complimentary articles written about a website and company, and then place these articles with publications that have high domain authority. Domain authority, or domain ranking (DR), is measured by numbers from 0 to 100, 100 being best, and 90 being terrific (unsurprisingly, Google.com’s domain authority is 100).
Articles with links from high domain authority websites to your website are then supposed to get you kudos with Google. These articles need not result from objective journalists taking a sincere interest in your website. This is pay to play. Thus, the SEO/PR firms, the article authors, and the newsie sites with high domain authority that host the articles all need to get paid. You can expect to pay $3,000 to $5,000 (or more) for such an article in a publication with a domain ranking at or just under 90.
Interestingly, having enough of these paid articles can help get your website/company an entry in Wikipedia. That can be very useful for your enterprise because having a Wikipedia entry gets you instant credibility (even though so much of Wikipedia is substandard and politically biased). Also, Google will take the first sentence or two of your Wikipedia entry and turn it into a knowledge panel, so that a line from Wikipedia will be the first thing people see about your website/company when they do a search for it. If all of this sounds incestuous, it is. As noted, Wikipedia has a long-standing incestuous relationship with Google. If you query some controversial topic, what Google reports about it in some AI-generated knowledge statement and what Wikipedia reports about it will typically align.
But wait, there’s more. Google’s inordinate control over online businesses is matched by its inordinate control over and ability to manipulate public opinion. Let’s turn to that next.
Robert Epstein’s Search Engine Manipulation Effect (SEME)
Robert Epstein is a psychologist and senior research fellow at the American Institute for Behavioral Research and Technology. The former editor-in-chief of Psychology Today, he is known for his research on behavioral science, digital influence, and the intersection of psychology and technology. He focuses especially on human autonomy in the digital age. In recent years, Epstein has become a prominent critic of the power wielded by large tech companies, especially Google, arguing that their control over information flow poses a serious threat to democratic processes and individual freedom.
He introduced the concept of the Search Engine Manipulation Effect (SEME), which describes the ability of Google and other search engines to adjust their algorithms to shift opinions and behavior—particularly voter preferences—without users being aware of their influence. Through controlled experiments, Epstein and his colleagues have shown that undecided voters can be significantly swayed by biased search rankings, tailored autocomplete suggestions, and targeted notifications. The SEME operates subliminally and yet substantively affects voting decisions. Because the SEME is not currently subject to effective oversight or regulation, it raises important concerns about election integrity and the manipulation of public opinion.
A few case studies of the SEME are worth reviewing here. In a 2015 study published in the Proceedings of the National Academy of Sciences, Epstein and Ronald E. Robertson conducted five experiments involving over 4,500 undecided voters in the United States and India. They found that presenting search results favoring a particular political candidate could shift voting preferences by more than 20 percent, with some demographic groups experiencing shifts up to 80 percent. Notably, the manipulation was often undetected by participants, highlighting the subtle yet powerful influence of search engine rankings on democratic processes.
In a subsequent 2017 study, Epstein and his team replicated the SEME findings and explored interventions to mitigate its effects. They discovered that alerting users to potential bias in search rankings reduced the shift in voting preferences from 39.0 percent to 22.1 percent, and more detailed alerts further reduced it to 13.8 percent. However, the only method that completely eliminated the effect was alternating search results to provide balanced exposure, suggesting that search engine algorithms need to be carefully managed to preserve electoral integrity.
As still another example in this vein, a 2024 study published in PLOS ONE by Epstein and Alex Flores introduced the Video Manipulation Effect (VME), examining how biased video content ordering on platforms like YouTube can influence political opinions and voting preferences. Using a YouTube simulator, they conducted two randomized, controlled, double-blind experiments with 1,463 eligible U.S. voters. The findings revealed that when video sequences favored a particular candidate, voting preferences shifted between 51.5 and 65.6 percent overall, with some demographic groups experiencing shifts exceeding 75 percent. This study underscores the significant impact that algorithm-driven content recommendations can have on public opinion, especially given that a substantial portion of video consumption is guided by such algorithms.
As it is, YouTube is widely considered to be the world’s second-largest search engine after Google, primarily due to its massive user base, the volume of searches conducted on the platform, and the scale of content available. With over 2.5 billion logged-in users per month and more than a billion hours of video watched daily, YouTube functions as a search engine for video content, making it a critical platform for discovery and information. As noted above, it is also the second most popular website by domain, after Google. Its search functionality is integral to how users discover and consume content, and it remains a dominant force in the search ecosystem. And of course, Google owns YouTube.
Epstein’s efforts to curb Google’s ability to manipulate public opinion and voter preferences are wide-ranging, involving a number of initiatives and websites, as detailed in the interview with Epstein by Robert F. Kennedy Jr. and Amaryllis Fox noted at the start of this section. At the heart of Epstein’s efforts to rein in Google is monitoring Google’s day-to-day behavior as it engages different internet users. As it is, Epstein’s efforts to monitor Google have faced significant logistical and financial challenges, particularly in building a politically balanced, nationwide network of “field agents” or “watchdogs” whose digital experiences with Google can be captured and analyzed in real time.
Because Google customizes its content per user and doesn’t archive the ephemeral data it delivers (such as autocomplete suggestions, homepage messages, and video recommendations), Epstein’s team must observe and preserve these interactions directly on users’ devices. Recruiting such agents is costly—$25/month per person—and the project has grown to 13,000 monitors in all 50 states, requiring over $325,000 per month in ongoing expenses, or nearly $4 million annually. This financial burden necessitates ongoing public support, and Epstein has urged individuals to sponsor monitors to sustain the program’s reach and effectiveness.
Beyond funding, Epstein faces the problem of infiltration: he reports that when calls have gone out for volunteers, Google has sent fake participants to sabotage the monitoring effort. To counter this, he avoids accepting open volunteers and instead uses secure recruiting, strict vetting, and non-disclosure agreements to prevent shills from contaminating the dataset. Epstein’s web initiatives that spearhead this operation include TechWatchProject.com, AmericasDigitalShield.com, and MyGoogleSearch.com. America’s Digital Shield provides a public dashboard where users can view real-time search-engine data trends. Epstein’s goal is to create a robust, legally admissible archive of Google’s algorithmic manipulations that can be used by legislators, attorneys general, journalists, and public advocacy groups to pressure Google into ceasing election interference and content manipulation.
To combat Google’s undue influence, Epstein makes both personal and public policy recommendations. On the personal front, in his article “Seven Simple Steps Toward Online Privacy,” Epstein outlines a strategy to protect personal privacy online and to reduce the influence of surveillance-based tech companies, particularly Google. He begins by noting that he has not received a targeted ad since 2014 due to his rigorous privacy practices. Epstein warns that Google’s suite of tools — Gmail, Chrome, Search, Android, and Google Home — are not free services but surveillance instruments designed to gather personal data for behavioral profiling and ad targeting. To counter this, he recommends abandoning Gmail in favor of encrypted alternatives like Protonmail, replacing the Chrome browser with Brave, and using Brave Search instead of Google. He also advises avoiding Android devices due to their constant offline tracking, and urges users to discard Google Home devices, citing their ability to eavesdrop even when idle.
Beyond these primary tools, Epstein suggests several additional privacy-preserving practices. He advocates clearing cache and cookies regularly to remove tracking scripts, using a reliable VPN like NordVPN to mask internet activity, and adopting the Signal app for secure messaging and calls. For group video conferencing, he recommends BraveTalk as a secure alternative to Zoom and Skype. He underscores the importance of systemic awareness by promoting platforms like RestorePrivacy.com, which offer vetted lists of alternatives to major surveillance-based tech services. Epstein’s overarching message is that users should opt out of exploitative platforms wherever possible and actively reclaim their digital autonomy — not through apathy or resignation, but through concrete, available, and affordable alternatives.
On the public policy front, Epstein offers a variety of recommendations. Here are some of his more salient public-facing recommendations:
Independent Monitoring System—Surveillance of Big Tech Manipulation
Epstein emphasizes the importance of creating large-scale, independent systems that monitor and record the ephemeral, personalized content Google delivers to users—content that otherwise disappears without a trace. As noted above, his current monitoring network, America’s Digital Shield, collects search suggestions, homepage messages, and video recommendations from over ten thousand registered voters across the U.S. This data allows the public and legal authorities to identify bias, voter manipulation, and censorship in real time and retroactively.
Real-Time Public Exposure (“Sunlight”)—Using Transparency as a Deterrent
Following Justice Brandeis’s principle that “sunlight is the best disinfectant,” Epstein argues that publicly exposing Big Tech’s manipulations is a powerful deterrent. It’s one thing to monitor Google’s shenanigans. It’s another to make their activities visible to the public. Public exposure can shift corporate behavior without requiring legislation or litigation. Epstein’s monitoring system is designed to generate continuous, court-admissible data for exactly this purpose.
Make Google’s Search Index a Public Commons—Mandated Public Access to Core Infrastructure
Epstein suggests that Google’s search index—its database of the web—should be declared a public commons, allowing competitors to build their own search engines using that data. This would foster algorithmic diversity, end Google’s monopoly on search, and restore competitive innovation. He argues there is legal precedent for declaring essential services public goods, especially when abuse becomes systemic.
Regulatory Oversight and Transparency—Mandating Algorithmic Transparency
Epstein advocates for governmental regulations requiring tech companies to disclose their algorithmic practices, especially concerning election-related content. He suggests that platforms should be transparent about how they personalize content and ensure that such practices are not partisan. His research supports legislative efforts aimed at enforcing transparency and accountability in algorithmic decision-making. In aid of this transparency, Epstein encourages decentralized and open-source platforms to counteract Google's centralized control over information.
Banning Surveillance Capitalism—Outlawing User Data Exploitation
Though he considers it politically unlikely in the U.S., Epstein proposes banning Google’s surveillance-based business model, which treats users as products and monetizes their behavior and data. He cites Apple CEO Tim Cook’s support for outlawing this deceptive and manipulative approach to monetization. Epstein believes it is fundamentally incompatible with democratic norms.
Rejecting Ineffectual Antitrust Action—Avoiding Misdirected Legal Remedies
Epstein warns that traditional antitrust lawsuits will not stop the real threats posed by Google—namely surveillance, censorship, and manipulation. Breaking up parts of the company won’t affect its core influence over search and content delivery. He argues Google’s legal teams even manipulate regulatory focus to avoid more threatening lines of inquiry.
Public Awareness Campaigns—Educating the Public on Algorithmic Influence
Epstein emphasizes the importance of raising public awareness about Google's capacity to subtly influence opinions and elections through algorithmic manipulation. He engages in extensive public outreach, including media appearances and maintaining websites like MyGoogleResearch.com, to disseminate his findings and educate users on recognizing and resisting such influences, bypassing Google in favor of less biased alternatives.
Whistleblower Protections—Safeguarding Insiders Who Expose Misconduct
Recognizing the value of insider information, Epstein wants to see legal protections for whistleblowers within tech companies. He highlights cases like that of former Google employee Zach Vorhies, who leaked documents indicating internal bias. In endorsing Vorhies’s book Google Leaks, Epstein wrote: “I know a lot of creepy things about Google, but I was shocked by some of the revelations that turned up in the 950 pages of documents Vorhies extracted from the company. We all need to understand how dangerous this company is.” Epstein cites such cases to underscore the importance of safeguarding workers in Big Tech who reveal its manipulative practices. Protecting whistleblowers will help to uncover and redress unethical behaviors by Big Tech.
International Pressure (e.g., EU Intervention)—Global Regulatory Initiatives and Cooperation
While skeptical of U.S. political will, Epstein believes the European Union might take decisive action against Google’s influence—such as declaring Google’s index a public utility—which would have global implications. He sees EU regulations as potentially stronger and more enforceable than US regulations. He notes, however, that even strong EU laws require monitoring to ensure compliance. Epstein suggests that international cooperation is essential to establish standards and regulations that limit Google’s global influence on public opinion and elections. In advocating for a unified regulatory approach, Epstein wants to ensure that Google's operations are subject to global accountability.
Robert Epstein has focused on Google’s political malfeasance. Earlier in this article I focused on Google’s economic/business malfeasance. Let’s next pull these two strands together.
Conclusion: What Is to Be Done?
In 1902, Vladimir Lenin published What Is to Be Done? This political tract reconceptualized the revolutionary socialist movement in Russia. He argued that spontaneous worker uprisings were insufficient to bring about socialism and insisted on the need for a highly disciplined, professional revolutionary party to lead the working class. Lenin emphasized the role of “vanguard” intellectuals in raising political consciousness, guiding the proletariat beyond trade-union demands to a full revolutionary struggle against the Tsarist regime.
I wonder if something similar is needed for a full revolutionary struggle against the Googlist regime. First off, let’s be clear that Google is highly unlikely to admit doing anything unfair or underhanded in its search rankings. Google, for instance, dismisses the work of Robert Epstein as “nothing more than a poorly constructed conspiracy theory.” Defending itself to the Washington Post, Google adds, “We have never re-ranked search results on any topic (including elections) to manipulate political sentiment. Moreover, we do not make any ranking tweaks that are specific to elections or political candidates, period. We always strive to provide our users with the most accurate, relevant answers to their queries.”
NOTE: Use of the term “conspiracy theory” to disparage a position can be credited to the CIA as it attempted to deflect attention about its complicity in the assassination of JFK. The CIA memo of 4/1/1967 titled “Countering Criticism of the Warren Report” includes, “Conspiracy theories have frequently thrown suspicion on our organization, for example by falsely alleging that Lee Harvey Oswald worked for us. The aim of this dispatch is to provide material for countering and discrediting the claims of the conspiracy theorists, so as to inhibit the circulation of such claims in other countries.”
As I read these denials by Google, I’m reminded of the famous scene in the film A Guide for the Married Man where an experienced philanderer (Robert Morse) teaches a would-be philanderer (Walter Matthau) to deny any and all infidelity even if caught in flagrante. Google has internalized this lesson. Here is the scene:
Regarding Epstein’s research, let me urge readers who think he may be blowing smoke to look at it and then come to their own decisions about it (I provided links to his research in the previous section). Don’t take my word for it, but also don’t take Google’s word for it. Some have criticized Epstein’s work for extrapolating too far beyond his data. But even if the effects are not as extreme as he claims, he would at worst be partly overstating Google’s influence over voting preferences and democratic processes—that influence would still be real and palpable.
Epstein’s unmasking of Google has been the work of an outsider. He operates as a reverse engineer who scrutinizes what Google is doing in public and from there draws conclusions about how Google is distorting search results to shape people’s preferences. But Google has also faced unmasking from inside, notably by former Google employee turned whistleblower Zach Vorhies (mentioned in the last section). In his book Google Leaks: A Whistleblower’s Exposé of Big Tech Censorship, Vorhies details the political bias that infects Google.
In the book, Vorhies recounts the moment it became clear to him that Google was betraying its objectivity in internet search. That happened in the aftermath of the 2016 US presidential election. In response to the election result, Google leadership determined that they would do everything in their power to prevent populist and low-information voters from deciding future presidential elections. To that end, they would implement “machine learning fairness” in Google searches to steer users toward “good information” (promoting the right sources holding the right political views), while effectively censoring “bad information.” This sort of paternalistic nudging would take place with most users remaining completely unaware (compare Richard Thaler and Cass Sunstein’s Nudge).
Google leadership became convinced that the 2016 presidential election was unfair because of fake news (mis-, mal-, and disinformation). Consequently, Google needed to provide an appropriate counterbalance. To dispassionate eyes, “machine learning fairness” is simply putting your hand on the scale to achieve the balance you want. In effect, machine learning fairness is a way for Google to override its basic algorithm. Google’s basic algorithm is PageRank, which is supposed to order websites in response to user queries based on the quantity and quality of links to those sites. But Google doesn’t just unleash its algorithm and let it have its way. Instead, it adds overrides that deliver results at variance with the algorithm when the algorithm produces results that Google doesn’t like.
According to Vorhies, one way Google overrides its algorithm is by instituting blacklists that block results when some term on the blacklist appears in a search query. Vorhies provides pages of blacklisted terms that he was able to find in Google internal documents while he still worked at Google. Another effective way to override Google’s basic algorithm is to add a layer of machine learning that simply skews results in favor of some preferred ideology or business outcome (“machine learning fairness”).
Still another approach to overriding Google’s algorithm is simply to impose a manual penalty, downgrading individual sites so that they rank poorly in Google search. Taken to its logical conclusion, such a manual penalty can mean deindexing a site so that it doesn’t even appear on Google, thus banishing the site to oblivion. Now I don’t mean to give the impression that such manual penalties are never warranted. Obviously spammy sites and sites depicting and advocating extreme violence would qualify for deindexing. But the cases at issue here are nothing like this; they involve widely held viewpoints that Google is deliberately suppressing.
To the computer-science purist, what I’m calling an override is of course ultimately also part of the Google algorithm. Even manual overrides need to be entered as part of Google’s codebase. Still, these overrides are essentially addenda and exceptions to what would otherwise be a conceptually clean algorithm that handles search queries fairly without built-in bias. These overrides can be thought of as epicycles in the old Ptolemaic cosmology, where as observations continued to pile up and contradict Ptolemy’s theory, epicycles within epicycles (exceptions within exceptions) needed to be added to the theory (in this case, to Google’s algorithm).
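To make the epicycle picture concrete, here is a purely hypothetical toy model of what such an override layer looks like in principle: a base algorithmic score is computed first, and then blacklist checks, manual penalties, and deindexing are stacked on top as exceptions. All site names, terms, and numbers are invented for illustration; this sketches the general shape of the technique Vorhies describes, not Google’s actual code.

```python
# Purely hypothetical toy model of a "ranking override" layer.
# All site names, terms, and numbers are invented for illustration;
# this is the general shape of the technique, NOT Google's code.

BLACKLISTED_TERMS = {"banned-term"}                   # hypothetical query blacklist
MANUAL_PENALTIES = {"downranked-site.example": 0.01}  # hypothetical per-site penalty
DEINDEXED = {"removed-site.example"}                  # hypothetical deindex list

def final_score(url, query, base_score):
    """Start from the base algorithm's score, then layer on exceptions."""
    if url in DEINDEXED:
        return None                                # site never appears at all
    score = base_score
    if any(term in query for term in BLACKLISTED_TERMS):
        score = 0.0                                # blacklist suppresses the query
    score *= MANUAL_PENALTIES.get(url, 1.0)        # manual downranking
    return score

assert final_score("removed-site.example", "anything", 0.9) is None
assert final_score("downranked-site.example", "news", 0.9) == 0.9 * 0.01
assert final_score("normal-site.example", "news", 0.9) == 0.9
```

Each added exception leaves the base algorithm untouched while overriding its output for particular sites or queries, which is exactly why such layers invite the epicycle comparison.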
So then, what are my recommendations for reining in and reforming Google? First off, let me say that I agree with all of Robert Epstein’s recommendations. Certainly, his monitoring of Google through “field agents” or “watchdogs” is invaluable at spotlighting Google’s shenanigans. Also, his recommendations, insofar as they can be implemented, will help to reform Google. That said, I suspect few people will want to endure the rigors of bypassing Google in their day-to-day activities.
Google has inveigled itself into most aspects of our internet life, and it takes concerted effort to extricate oneself from it. Take Gmail. Epstein recommends going with Protonmail in its place. I have colleagues who take seriously removing themselves from Google influence and thus subscribe to services like Protonmail and Hushmail. But it takes added effort and cost to use such services. Google’s suite of offerings is more convenient to use. What’s more, if you are with such services, many of your correspondents will still be using Gmail, so your emails will most likely remain in the Google system, even if one step removed.
Speaking for myself, despite my intimate knowledge of Google’s shenanigans, I still make liberal use of Google’s services (Gmail, Google Maps, YouTube, etc.). True, I pride myself on being aware of its tricks and thereby rationalize my continued use of Google. Moreover, I’m able to use alternative services when I don’t want Google to track me (such as the Tor Browser). Maybe I’m deluding myself. Maybe Google is getting information from me that it will effectively use against me.
But excising Google from my internet life at this point seems extreme. Also, I don’t see any efforts I make at distancing myself from Google as making a palpable difference in reforming Google since the percentage of people committed to sidelining Google seems small. Nor do I see an emerging groundswell of people who will jump ship from Google en masse. So while I cheer Epstein in his online privacy recommendations contra Google, I don’t see these recommendations as constituting a solution to the problem of Google bias in search.
Epstein’s public-facing recommendations likewise all have merit, but their practicability remains for me in question. Some of his recommendations ultimately come down to raising public awareness about Google’s machinations. This is important work. But given the extent to which Google controls public awareness, raising public awareness about its bias is likely to have limited effect, especially as Google has so many goodies (Gmail, maps, YouTube) at its disposal with which to bribe users.
Looking to new laws to rein in Google could have merit if the laws can in fact be enacted and enforced. But what would those new laws be and what effect would they have in actually making Google more honest? Enhanced whistleblower protections, for instance, will only go so far. Vorhies’s depiction of Google from the inside shows an echo chamber at Google in which most employees are willing to drink the Kool-Aid and the rest are happy to pretend that they did. Moreover, Vorhies’s book Google Leaks, though causing a stir when it was published in 2021, seems not to have made much of a dent in reforming Google. True, it could be argued that Google would be an even worse actor without that book and its revelations. But even with it, Google shows no signs of repentance. Institutions with the size and power of Google can absorb plenty of criticisms and just keep chugging along.
What about laws that Epstein would like to see enacted such as treating Google as a public commons and making its algorithm transparent? Such laws could have benefits, but only if they could be used to compel Google to improve its behavior in clearly specified ways. I could, for instance, imagine the Google algorithm being opened to the public in such a convoluted form that at the end of the day it might still be difficult to hold Google’s feet to the fire about correcting its bias. I’m reminded of the small law firm that takes on a big company in a suit and during discovery asks for certain information, only to be inundated with a warehouse of banker’s boxes, which effectively hides the desired information like a needle in a haystack. I could see Google doing much the same.
In any case, Google’s lobbying arm is ferocious, so it’s hard to imagine Congress taking on Google in a serious way, as in advancing Epstein’s public-facing recommendations. True enough, we see members of Congress discussing breaking up Google’s monopoly. But how does one break apart Google search? It’s all one big integrated algorithm (even if it is a pastiche with lots of jury-rigged overrides). This is not like breaking up AT&T forty years ago, when regions of the country could be assigned their own piece of the AT&T pie. In fact, I would say that everything I’ve seen from Congress in supposedly holding Google’s feet to the fire is mainly street theater, as in, “See, we’re aware of the problem, but no, we’re not going to do anything effective to resolve it.”
So where does that leave us? I would say creating new laws to regulate Google is probably a dead end. Better, in my view, is to take advantage of existing laws, regulations, and remedies. Specifically, a big-tech company like Google that claims Section 230 and DMCA protections (as discussed earlier in this essay) could be deemed in violation of these protections if it actively moderates or curates content in ways that suggest editorial control, thus acting as a publisher rather than a neutral platform, or if it fails to promptly remove infringing content after receiving valid DMCA takedown notices.
The Federal Trade Commission (FTC) could then investigate such unfair practices, the Department of Justice (DOJ) could pursue criminal copyright infringement, the Copyright Office could challenge DMCA protections, and a president unhappy with Google could issue executive orders that attempt to remove such privileges entirely. Remedies could range from injunctions and monetary damages (statutory damages up to $150,000 per willful copyright infringement) to loss of safe harbor, potentially exposing Google to sprawling liability. Private lawsuits from rights holders or affected users could also seek compensatory or punitive damages, while antitrust scrutiny from the FTC or DOJ might address monopolistic content moderation practices. And finally, executive orders that treat Google as a threat to election integrity, as a biased form of search, and as a disruptive force in American business could give all such efforts teeth.
The US government has many levers it can pull to rein in Google. If I were to make a prediction, I see the remedy to Google’s bias coming through the courts. A generation or two ago, corporate powerhouses included the cigarette companies. They had successfully withstood lawsuit after lawsuit on the dangers of cigarettes—until one lawsuit finally succeeded, thereby puncturing their myth of invincibility. That story is masterfully recounted in the 1999 film The Insider. I can see Google facing similar challenges in court, perhaps recounted with its own Hollywood movie. Consider the following injunctions that courts might impose on Google:
Default to Unadapted Search. Make the default, when users do a Google search, that the search is the same for everyone. In other words, make the default unadapted search, so that what you see, everyone else sees. Likewise for autocomplete. Users would then have to opt in to experience adapted search. Unadapted search would thus set a baseline for the rest of Google search. It would tell us what Google sees when it is not trying to influence individual users.
Organic Results First. Right now, Google inundates its search results with ads, knowledge panels, and AI-generated summaries. All this is secondary to the organic results that direct users to the supposedly best places on the web to answer their queries. This injunction would put organic results first and everything else below them (“below the fold”). Google’s ad revenues would take a hit, but this move would curtail Google’s parasitizing of the web, repackaging the web’s content without giving its creators proper credit.
Warning Labels. We put warning labels on cigarettes, household cleaners, and medications. To the degree that Google is egregious in its bias, it would be appropriate to enforce a warning label at the top of every Google search page, such as, “WARNING: Google search is biased. To avoid being misled, consult other sources.”
User Disclosure and Deletion. Google maintains what is essentially a digital dossier on all its users. Currently, we don’t get to see what’s in these dossiers. Google, as a matter of transparency, needs to make its user dossiers available to their respective users in a clear, readable form. Moreover, users should have the option of having their Google digital dossiers deleted in an enforced act of forgetting. Many people want to turn over a new leaf, and Google needs to honor that.
Put Algorithm Updates on Staging. Right now, when Google does an update, it simply springs the update on the world. Businesses are then often left reeling as they find their websites downranked. Because businesses depend so much on Google, major Google updates should be put on a staging site (e.g., staging.google.com) where their impact could be assessed by users as well as by governmental agencies, such as the Commerce Department. Google updates would therefore be subject to a deliberation and debate phase, and their likely impact could not just be reviewed but also measured. I suspect every major Google update has an economic impact in the tens of billions of dollars.
Illegitimate In-Kind Political Contributions. If Google is algorithmically favoring, say, one presidential candidate over another, such an act would amount to an in-kind contribution, especially if it was deliberate, coordinated, and had a measurable impact on the election. Under federal campaign finance law, corporations are prohibited from making direct contributions to candidates, including non-monetary (in-kind) contributions like services or resources that benefit a campaign. If Google's internal staff intentionally altered algorithms to sway voter behavior, the labor of its engineers and the platform’s algorithmic reach would constitute valuable assets provided to the campaign. A court could then impose penalties, require disclosure, or pursue enforcement if Google’s actions here were not publicly disclosed or exceeded campaign contribution limits.
Compliance Through Shutdown. If Google is doing something illegal, a judge could require compliance to remedy the wrong. Often compliance takes the form of fines, so that every day in violation of compliance is a day that Google pays a fine. But Google makes so much money that fines typically don’t mean anything to it—fines are just a cost of doing business. What would really encourage compliance, however, is if Google had to shut down until it met compliance. Google cannot afford a shutdown. So much of its business depends on its reliability. If Google search might disappear, if Gmail might go down—even if for a brief time—users will vote with their feet to look for other equivalent services, undermining Google’s dominance.
This list is not meant to be exhaustive, but it is representative of concrete steps that could be taken to reform Google and where its reformation would be verifiable. I encourage others to add to this list.
In closing this essay, let me offer a broader perspective on Google’s evilization. Ultimately, what is responsible for Google’s turn to the dark side is its delusional belief that it is on the side of goodness and light and so must do everything it can in their service. The problem is that ever since humanity’s fall in the garden, when we ate the fruit of the tree of the knowledge of good and evil, we have experienced both good and evil but have been far less successful at distinguishing between the two. Simply put, we often can’t tell the difference between what is actually good and what is actually evil.
Not having taken this lesson to heart, Google thinks it knows the truth about what’s good and evil, and so casts itself as a valiant defender of good that must do everything in its power to help good along. Google forgets Blaise Pascal’s (1623–1662) admonition in the Pensées that people “never do evil so completely and cheerfully as when they do it from religious conviction.” Pascal wrote in an age of faith. But in this secular age, his admonition can be recast as people never doing evil so completely and cheerfully as when they do it from ideological conviction. Pascal’s point holds regardless of whether the ideology is religious or secular, as in the case of Google.
If you see yourself as on the moral high ground, then as an information company, your job will be to distinguish between good information and bad information. The good information must be highlighted and the bad suppressed. Bad information then gets demonized with labels such as malinformation, misinformation, and disinformation. But the problem with calling something bad information is that you may be giving it that label because you yourself have bought into bad information. And who’s to say what is good and bad information?
In posing this question, I’m not espousing epistemic or moral relativism. I’m simply underscoring human fallibility. All of us get a lot of things wrong. An information company like Google is supposed to be a platform, not a publisher. It is therefore supposed to provide an impartial forum for a diversity of views. Consequently, it should not be deciding between good and bad information. To see how spectacularly Google failed on this point, consider that in 2021, when queried about the origin of SARS-CoV-2, Google’s search results promoted the natural spillover theory, which holds that the virus passed from animals (likely bats) to humans. At the time, the lab leak theory was labeled a “debunked conspiracy theory.” And yet the lab leak theory was ultimately vindicated.
Google claims that it is doing right by suppressing mal-, mis-, and disinformation. In fact, its problem is not with any of these forms of information. Its problem, rather, is with mono-information, information that’s so focused on one thing that it misses other competing items of information—like being so focused on an individual tree as to miss the forest. The answer to bad information is therefore not good information because often we don’t know which is which. Instead, the answer to bad information is more information.
In the spirit of our First Amendment, information needs to be set free. The truth can take care of itself provided it has open access to the marketplace of ideas. Google, unfortunately, tries to control which vendors are allowed in that marketplace. In so doing, it has proven itself a wretched caretaker of the truth. Google suppresses pluralism in perspectives. Its problem isn’t misleading information per se but rather enforced singularity of narrative. The First Amendment guarantees freedom of thought and expression. The alternative to the First Amendment is a narratocracy in which a society’s commanding institutions determine what stories are allowed and disallowed.
Narratocratic elitists who control such institutions label those who reject their considered opinions as “uninformed” or “low-information.” Invariably, however, it is these elitists themselves who are guilty of low information because they artificially limit the information they are willing to make available to the people, thereby preventing the full range of information from being known and considered.

There’s still hope for Google. Its mission statement remains in place: “to organize the world's information and make it universally accessible and useful.” The challenge is to get Google to live up to that mission. If the past is any indication, getting Google to that place will require muscular persuasion.