AI Has a Problem. So How Do We Use It While Causing Less Harm?
A Beginner’s Guide to Reducing Your AI Footprint and Minimizing Harm
You’ve probably heard that AI is bad in some way. Maybe you heard it has something to do with wasting water or leaving people unemployed. Still, you find yourself surrounded by it, a seemingly inescapable barrage of add-ons: in your phone camera, in your inbox, in recommendations from your friends, your coworkers, your kids. Like the internet in the 90s, AI is here and it intends to stay. The question, then, is: how do we live with it while doing the least harm to people, to the planet, and to our own communities?
It’s not all bad.
In fact, AI is proving to have significant utility. In healthcare, for example, there is now an FDA-cleared system that can detect diabetic retinopathy from eye photos, and in research, tools such as AlphaFold 3 help predict molecular structures used in drug discovery. In education, AI supports tutoring, accessibility features like captioning and translation, and grading assistance. Despite these promising signs, there are real problems.
What kind of problems is AI causing anyway? (And why should you care?)
1) Environmental
Electricity (the power bill): Training big models and answering millions of prompts adds heavy demand at data centers. That extra load can strain local grids and push up residents’ power bills (see the IEA on AI’s growing electricity needs, and siting debates like xAI’s gas-backed data center in Memphis).
Water: Many data centers use evaporative cooling, which can consume large amounts of potable or otherwise usable water. Researchers estimate AI-linked water withdrawals could reach roughly 1.1 to 1.7 trillion gallons globally in 2027. In dry or fast-growing regions, large data-center users have already triggered oversight and disclosure fights, for example the public-records case over Google’s water use in The Dalles, Oregon.
2) Social and Economic
Invisible labor: AI depends on people to label data and moderate harmful content, often in lower-income countries, for low pay and with weak labor protections. These workers face mental-health risks from repeated exposure to violent and sexual-abuse content, often without counseling.
Inequity & Bias: Skewed training data can lead to unfair outcomes in hiring, credit and lending, housing, national elections, and many other areas, with women and people of color most negatively affected.
Misinformation & fraud: AI-driven deepfakes and voice cloning are making phone and text scams more convincing.
Policy surrounding AI is still catching up. The EU AI Act will roll out in stages over the next few years and set clearer rules in Europe, including transparency requirements for some AI-generated content, while the United States still has no comprehensive federal AI law.
The situation we find ourselves in is this: a rapidly advancing technology is having significant impacts on our environment and communities. Its growth is strongly incentivized by profit and promises of utility, yet there are no strong, universal guardrails to keep that growth in check. So, what can be done?
How to think about your interactions with AI: a simple harm ladder
First, who holds the most responsibility?
Simply put: the AI companies and governing bodies. Providers choose where AI runs, how it is cooled, and what power it uses, and they set the defaults you see. Their biggest duty is to move toward 24/7 carbon-free energy, publish water-use metrics like WUE, and avoid potable-water evaporative cooling in dry basins; government policy, in turn, should require accountability and cleaner practices from these companies. Without strong incentives on both sets of actors, however, voluntary accountability is unlikely.
Can I make a difference?
AI is still in a fairly malleable state. There are ways to push the industry toward cleaner, less damaging practices, and they depend in part on how we interact with AI today. Shaping AI in a more mindful direction can take many forms; this guide focuses on one: engaging with AI in ways that lower our individual AI footprints and reduce the harm it causes.
How you use AI: the harm ladder
What happens when you say “Hey Alexa,” click an AI summary, or prompt a GPT model? Your device sends a request to big computers in a data center. Those computers use electricity, and often water for cooling, to run the AI model and send the answer back. Resource use varies by setup, but some tasks put a much lighter load on those systems than others. This harm ladder orders common jobs from lightest to heaviest:
No AI used - lightest
On-device features where no internet is used - very light
Text on “lite” AI models like Claude Haiku or Google Gemini Nano - light (shorter text is lighter than longer text)
Text on big models like GPT-5, Claude 3.5 Sonnet or Claude Opus, Gemini 1.5 Pro - medium (shorter text is lighter than longer text)
Audio jobs - heavy
Image generation - heavier
Video generation - heaviest
The framework
Opt Out → Right-fit use → Pick better providers → Protect your data → Provenance
Opt Out. When you do not really need AI, skip it. Yes, you will still bump into AI elsewhere, but an intentional “no” still helps. What to do instead? Use classic Google search instead of AI summaries, and turn off always-listening assistants on your devices (iPhone: turn off Siri; Android: turn off Google Assistant; Alexa: mute the microphones). Keep it simple: if AI isn’t needed, don’t use it.
Right-fit use. If you choose AI, keep it light. Small jobs on small models use less computing power, and short answers are lighter than long ones (refer to the harm ladder above).
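If you reach these models through code rather than a chat app, right-fit use can be a one-line choice. Here is a minimal sketch using the OpenAI Python SDK; the model name, token cap, and prompt are illustrative, and any provider’s lighter tier works the same way:

# Right-fit use: pick a small model and cap the response length.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; "gpt-4o-mini" stands in for any
# lighter-tier model.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # a small model for a small job
    max_tokens=150,       # a short answer is lighter than a long one
    messages=[{"role": "user", "content": "Summarize this note in two sentences: ..."}],
)
print(response.choices[0].message.content)

The same default applies in chat apps: pick the lighter model from the model menu and only switch up when the task truly needs it.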
Pick better providers. Some companies are more serious about harm reduction than others. Look for three things:
Clean power, hour by hour. A good sign is a 24/7 carbon-free plan.
Water transparency. Do they explain water use or give a Water Usage Effectiveness (WUE) number?
Siting and cooling. Are they avoiding water-hungry evaporative cooling in dry places?
How to find this in 60 seconds: Search the company name plus “sustainability report,” “water usage effectiveness,” or “data center locations news.”
Protect your data: AI models need data to keep improving and stay relevant, and your data is part of what gets used to that end. Data privacy can feel complicated, or not that serious if you are not already privacy-minded, but the stakes here are simple: handing your data to the internet and to AI creates a risk of it appearing elsewhere online, in front of the wrong audience. Researchers have shown that large models can sometimes memorize and reveal snippets of their training data when prompted in specific ways.
What is the advice here then? It is twofold:
Do not input sensitive data: personal ID information; bank or credit-card numbers; tax returns, invoices, or payroll; diagnoses, medications, or genetic data; work secrets like unreleased code or features, contracts, pricing, client lists, or anything under NDA. The rule of thumb: if the data could cause harm by surfacing in front of a bad actor, do not feed it into an AI system.
You can opt out of your data being used for training. Telling companies not to store or train on your data is actually pretty easy: most platforms have a toggle under Settings that controls whether your conversations are stored and used for training. In ChatGPT, for example, go to Settings → Data controls → “Improve the model for everyone” → Off. This stops new chats from being used to train models.
Provenance: Be transparent about your own use of AI. When we say clearly where AI was used, we make scams harder, protect demand for human-made work, and push the market toward honesty. It can be as simple as a short note saying AI helped: “AI-assisted, outlines only,” or “Image created with AI.”
To the AI companies
AI can be less harmful than it is today. You run the systems, and doing better helps everyone:
Move toward 24/7 carbon-free energy.
Publish water metrics like WUE, plus cooling methods by region.
Avoid potable-water evaporative cooling in dry basins.
Publish model cards and known limits.
Honor easy opt-outs from training on user content.
Pay and protect data labelers and moderators fairly.
Label AI-generated media with content credentials.
A Closing Note
AI is still malleable. We do not have to be passive. How we spend our money and time matters: choosing cleaner tools and settings creates a market for cleaner AI. Ask providers for better practices. Change your defaults. Label your AI use. Add your voice to the conversation about AI and be part of shaping it.


