Google talks too damn much

By Xzetera

“The real psychological truth is this: If you’ve got nothing to hide, you are nothing.”
― Shoshana Zuboff, The Age of Surveillance Capitalism, 2019


Google talks too much

Google has been making headlines again, and not for pursuing its stated mission of remaining “committed to significantly improving the lives of as many people as possible.” No Tech for Apartheid, an alliance of Google and Amazon workers, organized sit-ins at the home of Google Cloud’s CEO and at Google Cloud’s New York City and Seattle offices to publicly demand that Google Cloud and Amazon Web Services withdraw from Project Nimbus, the $1.2 billion contract with the Israeli government. This took place amid a rising sense of outrage, the day after Tax Day, which saw a coordinated international economic blockade aimed at disrupting the national economies that facilitate the occupation and genocide of the Palestinian people.

It may not be obvious, considering how popular its products are, but Google is a controversial company. Google’s business model has propelled it to be a dominant force, one that impacts the privacy and security of members of political and social movements. As a pioneer of surveillance capitalism, Google has positioned itself as a symbiotic ally to law enforcement and intelligence agencies worldwide.

As the amount of data we generate compounds alongside the development of the Internet of Things, greater inter-connectivity, and cloud computing, more personal data exists about us than ever before, and it grows by the minute. Avoiding and abandoning Google services denies Google and the government a treasure trove of personal, potentially incriminating data. Using the right tools, activists can further protect themselves from government overreach and repression.

Project Nimbus

According to TIME magazine, the Google workers involved in the sit-ins lost their jobs for proclaiming that they won’t take part in constructing “powerful AI and cloud computing tools [that they worry] could be used for surveillance, military targeting, or other forms of weaponization.” Google Cloud’s spokesperson has insisted that Project Nimbus is not connected to weapons or intelligence services, and that all of their customers must abide by their terms of service, which forbid use that would “violate the legal rights of others, or engage in ‘violence that can cause death, serious harm, or injury.’” But Israel’s finance ministry explicitly stated in 2021 that Project Nimbus would be used by the ministry of defense. The Intercept obtained training materials indicating that Project Nimbus would provide the “full suite of machine-learning and AI tools available through Google Cloud Platform.[…] [and] would give Israel capabilities for facial detection, automated image categorization, object tracking, and even sentiment analysis that claims to assess the emotional content of pictures, speech, and writing.”

 

Google, our favorite snitch?

In line with the Boycott, Divestment, and Sanctions movement, consider boycotting Google products and services such as Search, Gmail, Google Docs, Google Cloud Platform, and Google Drive. In today’s world, data is money. You may not pay for Google’s services, but make no mistake: by using their platforms, you generate data that will be monetized. And if you need another important reason to quit Google, they will snitch on you at the drop of a hat. According to their public transparency report, in the first half of 2023 Google received government requests for information on 436,326 accounts, and elected to provide information for 80% of those requests. This figure does not include subpoenas by private parties in civil lawsuits.

Google Transparency Report

Two main Google providers exist:

  • Google LLC. Based in the US, operating under US law.
  • Google Ireland Limited. Based in Ireland, operating under Irish law. Provides Google services to the European Economic Area and Switzerland.

Each provider must adhere to the data privacy laws of its respective region. We will be focusing on Google LLC, based in the US.

Google LLC (USA) faces these types of government requests:

  • From US government agencies in civil, administrative, and criminal cases. To comply with the Electronic Communications Privacy Act (ECPA) and the Fourth Amendment of the US Constitution, authorities must at least:
    • In all cases: issue a subpoena to compel disclosure of basic subscriber registration information and certain IP addresses.
    • In criminal cases: get a court order to compel disclosure of non-content records, such as the To, From, CC, BCC, and Timestamp fields in emails; and get a search warrant to compel disclosure of the content of communications, such as email messages, documents, and photos.
  • From US government agencies in matters of national security. The US government may use a National Security Letter (NSL) or one of the authorities granted under the Foreign Intelligence Surveillance Act (FISA) to compel Google to provide user information.
    • An NSL doesn’t require judicial authorization. It can only be used to compel Google to provide limited subscriber information.
    • FISA orders and authorizations can be used to compel electronic surveillance and the disclosure of stored data, including content from services like Gmail, Drive, and Photos.
  • From government authorities outside the US. Google may provide user information if doing so is consistent with all of the following:
    1. US law, meaning that the access and disclosure are permitted under applicable US law, such as the ECPA.
    2. The law of the requesting country, meaning that Google requires the authority to follow the same due process and legal requirements that would apply if the request were made to a local provider of a similar service.
    3. International norms, meaning that Google only provides data in response to requests that satisfy the Global Network Initiative’s Principles on Freedom of Expression and Privacy and its associated implementation guidelines.
    4. Google’s policies, which include any applicable terms of service and privacy policies, as well as policies related to the protection of freedom of expression.
  • Requests for information in emergencies: “If we reasonably believe that we can prevent someone from dying or from suffering serious physical harm, we may provide information to a government agency — for example, in the case of bomb threats, school shootings, kidnappings, suicide prevention, and missing persons cases. We still consider these requests in light of applicable laws and our policies.”
  • When Google receives a request from a government agency, it sends an email to the user account before disclosing information. If the account is managed by an organization, notice goes to the account administrator.
  • Google won’t give notice when legally prohibited under the terms of the request; it will provide notice after a legal prohibition is lifted, such as when a statutory or court-ordered gag period has expired.
  • Google might not give notice if the account has been disabled or hijacked, or in emergencies such as threats to a child’s safety or threats to someone’s life; in those cases, it will provide notice if it learns that the emergency has passed.

Source: Google Transparency Report [https://transparencyreport.google.com/]

Totally optimized tattling

The volume of information shared with law enforcement and other parties is unprecedented. From their humble beginnings to their current worldwide spread of light-speed data centers, Google has become perhaps the most prolific informant in existence, alongside Amazon.

Originally, search engine results were limited to URLs that humans had entered manually. Page rankings were calculated by how many times the search terms appeared in a given webpage.

Google’s initial contribution to this field was to program search algorithms that would surf the web dynamically. This process is called web-crawling, or web-spidering. Each webpage encountered is automatically indexed, that is, entered into a huge collection of webpage URLs with associated metadata. To monetize their technology, Google began collecting large amounts of usage data linked to individual IP addresses (internet addresses), providing a mechanism for targeted advertising. To scale the business, they used profits to buy more computing power, which facilitated the release of new products and services that generate more data, which in turn generates more revenue and allows the cycle to feed back toward ever higher profitability and scale. Scaling Google services and products means buying equipment (and, increasingly, manufacturing equipment) in bulk, at a scale that allows for future expansion. In scaling, an excess of computing power is bought that is not yet being used. This brings us to another huge revenue stream for Google and other big tech firms: owning a constant surplus of computing resources puts Google in a position to sell access to that computing power in the form of cloud computing.
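The crawl-and-index process described above can be sketched in a few lines. This is a toy illustration, not Google’s actual technology: to stay self-contained and offline, the “web” here is an in-memory dict of URL-to-HTML pages (all URLs and page contents are invented), and the crawler does a breadth-first walk, following links and building an inverted index of term-to-URLs.

```python
from collections import deque
from html.parser import HTMLParser

# A tiny hypothetical "web": URL -> HTML. A real crawler would fetch
# these pages over the network instead.
WEB = {
    "http://a.example": '<a href="http://b.example">go</a> search engines rank pages',
    "http://b.example": '<a href="http://a.example">back</a> pages about engines',
}

class PageParser(HTMLParser):
    """Collects both the outgoing links and the visible text of a page."""
    def __init__(self):
        super().__init__()
        self.links, self.text = [], []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += [v for k, v in attrs if k == "href"]
    def handle_data(self, data):
        self.text.append(data)

def crawl(start):
    """Breadth-first crawl; returns an inverted index of term -> set of URLs."""
    index, queue, seen = {}, deque([start]), {start}
    while queue:
        url = queue.popleft()
        parser = PageParser()
        parser.feed(WEB.get(url, ""))
        for term in " ".join(parser.text).split():  # crude tokenization
            index.setdefault(term.lower(), set()).add(url)
        for link in parser.links:                   # follow unseen links
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return index

index = crawl("http://a.example")
print(sorted(index["engines"]))  # both toy pages mention "engines"
```

Early engines ranked results by term frequency within a page, much like counting how many URLs each term maps to here; Google’s PageRank innovation was to weigh the link structure itself, which this sketch only records but does not score.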

Google’s feedback loop of profit and power

Surveillance capitalism

Google’s business model is a feedback loop of power and profit that has grown Google into one of the most powerful companies in the world. Driven by the risks and rewards of the market, instead of government funding and bureaucracy, it is through the cold, profit-driven logic of capitalism that Google invented the economic system of surveillance capitalism:

In our time, surveillance capitalism repeats capitalism’s “original sin” of primitive accumulation. It revives Karl Marx’s old image of capitalism as a vampire that feeds on labor, but with an unexpected turn. Instead of claiming work (or land, or wealth) for the market dynamic as industrial capitalism once did, surveillance capitalism audaciously lays claim to private experience for translation into fungible commodities that are rapidly swept up into the exhilarating life of the market. Invented at Google and elaborated at Facebook in the online milieu of targeted advertising, surveillance capitalism embodies a new logic of accumulation. Like an invasive species with no natural predators, its financial prowess quickly overwhelmed the networked sphere, grossly disfiguring the earlier dream of digital technology as an empowering and emancipatory force. Surveillance capitalism can no longer be identified with individual companies or even with the behemoth information sector. This mutation quickly spread from Silicon Valley to every economic sector, as its success birthed a burgeoning surveillance-based economic order that now extends across a vast and varied range of products and services.

Shoshana Zuboff,  Surveillance Capitalism and the Challenge of Collective Action. Sage Journals, 2019, [https://doi.org/10.1177/1095796018819461], Internal citations omitted.

Surveillance capitalism is intertwined with the surveillance state in a mutually beneficial relationship. The surveillance state, of course, builds infrastructure, legal frameworks, and other affairs of state that facilitate business. It could be assumed that surveillance capitalists rely on the state more than the surveillance state relies on them, but this would not account for the degree to which the two depend on each other.

This is a critical point to hit home:

Google’s global dominance in big data uniquely enables it to assist government surveillance investigations. Through ingenuity, market dominance, and monopolization, it has become one of the most comprehensive and efficient surveillance apparatuses on Earth. While Google’s motivation is profit, the fruits of surveillance capitalism are a gold mine for law enforcement and intelligence agencies. Today, these organizations rely on each other and hold many shared interests, further blurring the lines between government and corporations. The State, Google, Facebook, Amazon, and Microsoft functionally combine into an unprecedented global dragnet.

Having successfully monetized their user data, Google established means for users to enthusiastically provide them with even more data. Enter the parade of cheap and free Google products and services:

Pictured is the complete Google ecosystem. 

 

 “Googlization” and the Google ecosystem — It’s a trap!

Google began to make many of its new products and services available for free or very cheap. These products often offered something completely new and revolutionary — for many people, their first experience using a cloud computing product. The rapid proliferation of Google services into new markets and contexts became so recognizable that the term “Googlization” emerged (Siva Vaidhyanathan, 2012), and Google even reached meme status.

 

Pictured: Admiral Ackbar from Star Wars


Docs, Translate, YouTube, Pay, Chromecast, Maps, and Earth are all incredibly powerful and useful examples of Software as a Service (SaaS) that have become household names. The YouTube platform encouraged a democratic perspective on who publishes information and media to the internet. Most people can remember the awe they felt when they used some of these services for the first time — navigating random streets in a distant city with Street View; reading foreign-language content in English via Translate; or making purchases with GPay from your cellphone, without even reaching for your wallet. It is no surprise that many of Google’s products became very popular, generating absolutely massive user bases.

The convenience and ubiquity of the ecosystem draw in users as well. When a team collaborates on a project, there is often pressure to stick to the ecosystem’s services. A shared medium must be chosen (a specific word processor, or Google Docs, for example), and if most members of the team are already using a specific tool, that is probably what the collaborators will use. Today, chances are most collaborators are using Docs, so the popularity of Google services itself draws more users into the ecosystem. Digital ecosystems with the most products and services have a huge advantage, and Apple and Google are the main choices; this lack of competition draws in still more users. Those who deviate from their chosen ecosystem miss out on much of its functionality, as native applications are designed to interface with the other applications within the ecosystem.

The ecosystem also provides the comfort of not having to remember dozens of usernames and passwords unique to each service through a process called Federated Identity Management (FIM).

One account to rule them all

Federated identity uses a single set of credentials across multiple domains. Simply put, it allows a user’s Google account to register and log in to different platforms and applications, including not only the many Google services but a large number of third-party platforms such as Stack Exchange, Soundcloud, and any other app that supports Google Single Sign-On (SSO). This is usually denoted by a “Login with Google” button next to the login and registration prompts. As long as the user has at least one Google account, no new registrations are required when using a new product within the Google ecosystem. This simplifies the login process and can improve security by reducing the number of usernames and passwords to remember, and by reducing the need for third-party identity management providers, which are likely to be less tested and hardened. The caveat to federated identity is that a compromise of the federated Google account will allow the threat actor to access every other website or platform with the “Login with Google” button, even ones the legitimate user has never visited.
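The single-point-of-trust structure can be sketched in miniature. This is a hypothetical toy, not Google’s actual protocol (real SSO systems such as OpenID Connect use public-key signatures, not a shared secret as here, and all names below are invented): one identity provider signs a claim that a user authenticated, and every relying service accepts that signature instead of keeping its own passwords.

```python
import base64
import hashlib
import hmac
import json

# Held only by the identity provider; every relying service trusts it.
IDP_SECRET = b"identity-provider-signing-key"

def issue_token(user_id):
    """Identity provider: sign a claim that this user has authenticated."""
    claim = json.dumps({"sub": user_id, "iss": "idp.example"}).encode()
    sig = hmac.new(IDP_SECRET, claim, hashlib.sha256).hexdigest()
    return base64.b64encode(claim).decode() + "." + sig

def verify_token(token):
    """Relying service: accept any user the provider vouches for."""
    payload, sig = token.rsplit(".", 1)
    claim = base64.b64decode(payload)
    expected = hmac.new(IDP_SECRET, claim, hashlib.sha256).hexdigest()
    if hmac.compare_digest(sig, expected):
        return json.loads(claim)
    return None  # forged or tampered token is rejected

token = issue_token("alice")
print(verify_token(token))        # the same token logs in everywhere
print(verify_token(token[:-1]))   # a truncated/tampered token -> None
```

Notice that one credential opens every door: whoever compromises the provider account (or the token) is “alice” on every relying service at once. That convenience-versus-blast-radius trade-off is exactly the problem discussed next.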

Do you see the problem? All data that a user generates while logged-in to federated platforms using Google’s SSO is linked to that one Google account. One search warrant or subpoena, examples of which are included in this article, will result in a web of accounts all linked to your identity through that one federated Google account.

By becoming the de facto federated identity, Google has further positioned itself as an authority in the digital realm, validated by their market dominance, scope, infrastructure, and relationships with governments. Google’s dominance and ubiquity place it at a central and trusted position, a single point of compromise.

What role does Google have in your own political activism?  We are only as strong as our weakest link.

Does that question make you cringe? It should. Gmail is the most commonly used email client and provider in the US, with 53% of the market. It’s safe to say most of us use Google in some way or another. At the Civil Liberties Defense Center, if you value privacy from Big Brother or are engaged in political activism against governments or corporations, we strongly encourage you to leave Google services to the degree that your threat model requires. As we see the ruthless state repression of activists, we must take collective responsibility for the safety of our comrades and the efficacy of our movements. Even if you feel you have nothing to hide, does everyone that you organize with, or are in communication with, feel the same way? Furthermore, considering the long history of government repression in response to social movements, is it even true that you have nothing to hide? Your comrades may trust you not to disclose your private communications. And the lawyers at CLDC have had to read enough of clients’ personal texts and emails in the discovery we receive in litigation to validate the threat to our simple right to privacy. If you don’t take precautions, such as using End-to-End Encryption (E2EE), you are, more likely than not, facilitating the storage of evidence that could be used to convict you or others if law enforcement decides to suck it all up.

Don’t take unnecessary risks for yourselves, your comrades, and your movements. Members of political movements cannot afford to be generous with their personal usage data, and it is socially irresponsible to do so. The small but significant step of moving movement-building communications off of Gmail and Google Docs will protect lives, livelihoods, community relations, our comrades, and of course, the movement you are building.

 

An actual email received by a CLDC client from Google

 

What’s your OPSEC?

All the best equipment, encryption, software, smarts, and skills can’t protect someone who provides law enforcement with evidence against themselves. There are actions you can take to protect yourself, and there are actions you can take that expose you. Operations security (OPSEC) refers to what an individual or organization does to protect its assets:

Systematic and proven process by which potential adversaries can be denied information about capabilities and intentions by identifying, controlling, and protecting generally unclassified evidence of the planning and execution of sensitive activities. The process involves five steps: identification of critical information, analysis of threats, analysis of vulnerabilities, assessment of risks, and application of appropriate countermeasures.

Source: National Institute of Standards and Technology (NIST). NIST Special Publication 800-53 Revision 5, Security and Privacy Controls for Information Systems and Organizations. [https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r5.pdf]

OPSEC is the methodology by which an organization protects its sensitive information. Activists generally require more privacy than the average citizen to operate safely and effectively. It is essential that activists utilize an agreed-upon operations security plan that is designed, practiced, and tested against the organization’s threat model. Even organizations that are very public-facing will work with sensitive information. For example, when organizing a march, details like time, place, and route are sensitive information that could be utilized by counter-protestors or law enforcement. Techniques, tactics, capabilities, associations, names, and addresses are all examples of sensitive information. You may be thinking, “I’m an activist, not a secret agent! Aren’t you being paranoid?” You may have a point, depending on what you are doing and what information you need to protect. On the other hand, this is the same security mindset that provides law enforcement with such an abundance of evidence. Determining how secure and private an organization needs to be is calculated using a threat model.

 

What does your threat model look like?

Threat modeling is a requirement for effective OPSEC, and it should determine what the organization’s OPSEC will look like. This integral process identifies assets and threats, and develops mitigations within the unique context of that organization.

The basic Threat Modeling questions:

  1. What do I have that’s worth protecting? (Assets)
  2. Who do I want to protect it from? (Actors)
  3. How likely is it that I will need to protect it? (Likelihood)
  4. How bad are the consequences if I fail? (Impact)
  5. How much trouble am I willing to go through? (Effort)
Source: Cypurr Collective Session 10/20: Library Privacy Week Extravaganza! by the Cypurr Collective. [https://github.com/CyPurr-Collective/Cypurr-Prezes/blob/master/Handouts%3AZines/Handouts/Cypurr%20Quick%20Cybersecurity%20Tips.pdf]

Creating a threat model will help you balance convenience vs. security and privacy. This balance is important because too much convenience will create vulnerabilities, but too many security- and privacy-motivated procedures will decrease productivity or be ignored by members of the organization.
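The five questions above can even be made concrete on paper or in a spreadsheet. The sketch below is one illustrative way to do it (the threats, actors, and 1-to-5 scales are invented examples, not CLDC guidance): score each threat by likelihood times impact, then spend your limited effort on the highest scores first.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    asset: str        # Q1: what is worth protecting
    actor: str        # Q2: who you are protecting it from
    likelihood: int   # Q3: 1 (rare) .. 5 (near certain)
    impact: int       # Q4: 1 (nuisance) .. 5 (severe consequences)

    def risk(self):
        # Classic qualitative formula: risk = likelihood x impact.
        # Q5 (effort) is the budget you allocate against this score.
        return self.likelihood * self.impact

threats = [
    Threat("march route and timing", "counter-protestors", 4, 3),
    Threat("member contact list", "law enforcement subpoena", 3, 5),
    Threat("public flyer drafts", "random web scraping", 5, 1),
]

# Address the highest-risk items first; accept low scores as tolerable.
for t in sorted(threats, key=Threat.risk, reverse=True):
    print(f"{t.asset} vs {t.actor}: risk {t.risk()}")
```

In this invented example the contact list (3×5 = 15) outranks the march logistics (4×3 = 12), which is the kind of non-obvious prioritization a written-down model surfaces that gut feeling misses.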

 

Five steps of threat modeling using cybersecurity terms

Security culture

Security culture is a term with roots in anarchist and radical communities and organizations. It refers to the customs and OPSEC of communities whose members may be targeted by the government and big corporations. All activist organizations and affinity groups will generally require some form of security culture. The specific security culture of a community will reflect their threat model, as well as other qualities specific to their needs.

The central principle of security culture is that information must be kept on a need-to-know basis. As lawyers, we know that having information in your head about someone else’s criminal activity can potentially get you into hot water if you are subpoenaed to a grand jury, charged with conspiracy, or worse. If it’s not in your brain, you’ve got nothing to say to the state. People should be able to consent to what they learn and then carry in their heads, because it could be a vulnerability.

Examples of cultural norms within security culture include not providing information to or associating with law enforcement or informants, not calling the police (exceptions may apply of course), not asking questions about “illegal” or sensitive activities you are not directly involved with, and not boasting or over-sharing (including on social media). Security culture means staying aware of the risk of informants or undercover agents within the organization, and refraining from gossiping or snitch-jacketing. Gossiping is a technique long used by agitators to sow division and fracture movements. Don’t do their work for them.

Some readers may face minimal threats, but will still benefit from moving their activist and movement-building communications to Signal and ProtonMail (with precautions detailed in the Recommendations section below).

Someone with a higher-risk threat model may need to move all of their communications out of the Google ecosystem to mitigate further surveillance, possibly exporting their email data and requesting that Google delete their personal data. Even corroborating details from other Google services can indirectly provide damning evidence.


Big tech companies ranked by how many data points they collect on users. From StockApps.

End-to-End Encryption (E2EE)

End-to-end encryption allows organizations to digitally protect information so that it remains on a need-to-know basis. It protects the confidentiality of communications from everyone other than the end users, keeping data encrypted both in transit and in storage. If you are using end-to-end encryption and law enforcement seizes or intercepts your data, the only data they can get is encrypted ciphertext (assuming you don’t decrypt it for them). Below we recommend several security- and privacy-focused services, many of which use end-to-end encryption to secure your user data.
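The core idea can be shown with a deliberately simplified toy. WARNING: this is NOT real cryptography and must never be used to protect anything; real messengers use audited protocols like the Signal protocol. The sketch (a hash-derived keystream XORed with the message, with invented key and message) only illustrates the structural point: the key lives solely at the two endpoints, so a server or wiretap in the middle sees nothing but ciphertext.

```python
import hashlib

def keystream(shared_key, length):
    """Derive a pseudo-random byte stream from the shared key (toy only)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(shared_key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(shared_key, plaintext):
    ks = keystream(shared_key, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

decrypt = encrypt  # XORing with the same keystream reverses itself

# The key exists only on the two endpoints, never on the server.
key = b"shared only by the two endpoints"
ct = encrypt(key, b"meet at the courthouse at noon")
print(ct)               # what a server or wiretap sees: unreadable bytes
print(decrypt(key, ct)) # what the recipient recovers with the key
```

The takeaway mirrors the paragraph above: a subpoena to the service in the middle yields only the first printout, not the second, unless someone hands over the key.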

 

 

Recommendations

 

The last line of defense

If your phone is seized and its contents are copied with a forensic extraction device, you can limit what law enforcement can actually read by keeping the device encrypted behind a strong passphrase.

And remember, the longest, most complex passphrase can only protect you as much as you are willing not to voluntarily disclose it. (Pro tip: don’t write your passwords on a piece of paper that the cops can find while raiding your house.)

Do your own research and stay up-to-date. 

  • We cannot guarantee that a tool is 100% secure or private. Such a tool does not exist.
  • We cannot guarantee that the tools we recommend will always offer the same degree of protections that they do now (May 2024).

The gold standard

The ideal recommendations for hardened, private, and secure communications software fit certain criteria.

  1. Open-source. The source code of the software must be open for review to be properly audited for security and privacy.
  2. Owned and operated by an independent, not-for-profit organization.
  3. Protections in place to mitigate future compromise of the organization.
  4. End-to-end encrypted.
  5. Proven to be effective.
  6. Regularly audited by security testers and researchers.

Secure, private, anonymous browsing and communication:

 

The Onion Router (Tor) and Signal:

These technologies are currently the most trusted by CLDC because they largely meet the criteria above.
  • The Onion Router (Tor) should be used to anonymize traffic and protect identity.
    • For beginners, the Tor browser should be your entry point to the anonymization and privacy offered by Tor. Educate yourself on how to avoid de-anonymizing yourself while using Tor, and remember that the Tor browser only protects your browser (web) traffic.
    • For more advanced users with higher privacy requirements, use Whonix, a Linux-based operating system that runs like an app inside virtual machines. Whonix routes all of your network traffic, not only browser traffic, through a Tor gateway running on your machine. Applications outside the browser, such as chat clients, online games, and music apps, are protected by Whonix but not by the Tor browser.

Whonix is a free and open-source desktop operating system (OS) that is specifically designed for advanced security and privacy. It’s based on the Tor anonymity network, the security-focused Linux distribution Kicksecure, GNU/Linux, and the principle of security by isolation. Whonix defeats common attacks while maintaining usability. [https://whonix.org/]

  • Use Signal (also free) for text, as well as audio and video calls.
    • Secure, private messaging and voice/video chat application with end-to-end encryption. Includes group call capability.
    • An alternative to texting, calling, Facebook messenger, Instagram DMs, Facetime, Telegram, WhatsApp, Google Hangouts, some Zoom meetings, and other messaging applications.

IMPORTANT! The tools below do not meet the criteria above to the same degree as Signal and Tor. We cannot guarantee privacy and security to the same degree, but we still recommend their use. Be sure to keep your software up to date, and keep an eye out for security and privacy incidents or news relating to any critical tools that you rely on.

Email

  • ProtonMail is a secure, private email service.
    • Uses end-to-end encryption.
    • An onion link is provided for anonymous, encrypted traffic to ProtonMail’s servers.
      • IMPORTANT! Use the Tor browser when accessing ProtonMail. 
      • NEVER LOG IN OR REGISTER WITHOUT USING TOR OR A TRUSTED VPN.
    • No recovery or secondary email address is required for registration.
      • IMPORTANT! DO NOT LIST SECONDARY OR RECOVERY CONTACT INFO. This will de-anonymize your ProtonMail account.
    • Supported by premium memberships. While ProtonMail is run by a for-profit company, ProtonMail does not need to sell your private data because revenue comes from premium members.
    • Alternative forms of payment are supported, such as cryptocurrency.

WARNING! ProtonMail has previously released IP address information connected to a French activist who used its services, leading to that activist’s arrest. Due to properly configured end-to-end encryption, message contents were not readable, but the service logged the IP address used to access the account. This could have been prevented by using Tor to hide the IP address while accessing ProtonMail.

Collaboration

  • CryptPad is a secure, private, collaboration tool:
    • Use as an alternative to Google Docs.
    • End-to-end encryption.
    • Offers access control and private documents.
  • Keybase is a platform for private, secure messaging and file-sharing.
    • End-to-end encryption using PGP.
    • WARNING! A public profile is a feature that can de-anonymize you if you enter any information into it. Leave your profile information blank.

Search engines

Use these search engines that won’t store your IP address and other private information:

Privacy browsers

The browser landscape has expanded significantly since Internet Explorer and Netscape. Surfing the web is riddled with ways to surveil users, and the browser is commonly the doorway through which personal data escapes. Techniques like cookies and browser fingerprinting are used to track you and generate data about you. To combat this, a crop of privacy-focused browsers, called privacy browsers, has emerged. With regard to privacy, Google Chrome and Microsoft Edge are the worst choices, despite their popularity. Chrome is Google’s flagship browser and is completely integrated into Google’s ecosystem to obtain maximum data from users. In the past, Chrome has automatically signed users in to the browser when they signed in to any Google service, and similar behavior could return.
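Browser fingerprinting works even with cookies disabled: trackers combine many small, individually innocuous attributes into one identifier that stays stable across visits. The sketch below illustrates the principle only (the attribute names and values are invented, and real fingerprinting scripts harvest far more, such as canvas rendering and installed plugins).

```python
import hashlib
import json

def fingerprint(attributes):
    """Hash a canonical form of the browser's attributes into a stable ID."""
    canonical = json.dumps(attributes, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()[:16]

visitor = {
    "userAgent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "screen": "1920x1080",
    "timezone": "America/Los_Angeles",
    "fonts": ["DejaVu Sans", "Liberation Mono"],
    "language": "en-US",
}

print(fingerprint(visitor))  # same browser -> same ID on every visit

# Changing even one attribute yields a different identifier, which is
# why privacy browsers try to make many users' attributes look alike.
other = dict(visitor, timezone="Europe/Paris")
print(fingerprint(other))
```

This is also why privacy browsers standardize or randomize attributes rather than merely blocking cookies: the goal is to make your combination of attributes indistinguishable from thousands of other users.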

When using privacy browsers, remember that no matter how secure and private your communications with a website are, if you log in, the website will know, and will usually save that information. If your account becomes linked to your real-world identity, the website can provide evidence to law enforcement showing that you logged in. Therefore, the information you provide when creating an account is extremely important. For example, if you weren’t protecting your privacy or IP address during account creation, then your IP can be forever linked to your account. If you did not, or for some reason cannot, use proper precautions when creating an account, do not use it for higher-risk activities, keep it isolated from your other accounts, and, importantly, use a new or designated email account if a secondary email address is required. This method of compartmentalizing email addresses and other accounts should be used widely. If law enforcement is able to link your accounts, those links become part of your digital fingerprint.

Whichever browser you use, review the settings to ensure the privacy features you are expecting are actually enabled. If you are curious about what data is being leaked by your current browser, try these free resources that run privacy tests on your browser:

 

Sources

Resources for the deep dive:
