If you’re anything like me, you’re sitting reading this entirely unconcerned about “how” we are going to achieve AI in the future.

I don’t mean the development of models to give us the best answers. Or the best code assistants. I’m certainly not referring to Seemingly Conscious AI (SCAI), Artificial General Intelligence (AGI) or even Superintelligence.

I’m referring to something so fundamental that we take it for granted as simply being there.

Compute.

Or rather, the critical assumption underpinning it all as we barrel toward our AI future: infinitely scalable compute.

I’ve written pieces about the huge demand AI places on compute. How the billions in venture funding are financing not the startups themselves, but their CPU/GPU/TPU/IPU cycles. How a company like OpenAI can be growing at breakneck pace yet running painfully in the red, all while claiming to need trillions of dollars of compute.

And I think we have a problem! Yesterday’s compute can’t address our needs for tomorrow. There’s an explosion of data center buildout. An insane quest to power and cool these new resources. An acceleration of quantum computing for AI and ML.

But still, I believe we need more.

Last week, Google published an update on their ongoing quest to drive greater efficiency with their compute - to get more juice from each squeeze, so to speak: 30x–40x more efficient. Sounds amazing (and kudos). In there was a nugget of inspiration - and I’d like to share it and develop it with you.

The median Gemini text prompt produces 0.03 gCO2e (grams of carbon dioxide equivalent) - source in comments for a great in-depth read.

And so I wondered - could we drive a more sustainable future for AI growth by leaning on environmental sustainability?

Let’s take a brief journey to understand the state of carbon credits and how they factor into the business landscape and decision-making today, and then architect a system that could drive the behavior change needed for a more sustainable path of AI growth.

Basic Background on Carbon

What are carbon credits?

Simply put, they’re a market-based mechanism where one credit represents the right to emit one metric ton of carbon dioxide equivalent. Think of it as environmental currency - you can earn credits by removing or preventing CO2 emissions, and spend them when you produce emissions.

Personal carbon footprints vary dramatically, but the average American generates about 16 tons of CO2 per year - that’s roughly 44 kilograms per day, or 44,000 grams. To put this in perspective:

  • A single Gemini prompt produces just 0.03 grams
  • Watching your favorite episode of Stranger Things costs about 36 grams of carbon
  • An average commute (20 miles roundtrip by car) generates about 8,800 grams of CO2

So while that Gemini prompt may seem like a minuscule amount of carbon, remember that (i) you’re already producing literal tons of carbon before you even add AI into the mix, and (ii) the little things really add up at scale!
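To make that scale gap concrete, here’s a quick back-of-the-envelope check of the figures above (the constants are simply the numbers quoted in this post):

```python
# Back-of-the-envelope check of the figures quoted above.
ANNUAL_TONS_CO2E = 16                  # average American, metric tons CO2e per year
GEMINI_PROMPT_GRAMS = 0.03             # per Google's published figure

daily_grams = ANNUAL_TONS_CO2E * 1_000_000 / 365
print(round(daily_grams))                           # ≈ 43,836 g per day
print(round(daily_grams / GEMINI_PROMPT_GRAMS))     # ≈ 1.46 million prompts per day's footprint
```

In other words, it would take well over a million Gemini prompts to match one day of the average footprint - which is exactly why the concern lives at aggregate scale, not in any individual prompt.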

Carbon credits drive real behavior change. When Microsoft pledged to be carbon negative by 2030, they didn’t just buy offsets - they restructured their entire cloud infrastructure, changed vendor relationships, and built carbon costs into every business decision. The market mechanism created accountability that good intentions alone couldn’t achieve.

Carbon Awareness for Businesses

Corporate climate commitments have reached critical mass. Over 1,500 companies have made net-zero pledges through the UN’s Race to Zero campaign, representing roughly 25% of global CO2 emissions.

Carbon accounting follows established frameworks. Companies track both direct and indirect emissions:

  • Scope 1: direct emissions from owned operations
  • Scope 2: indirect emissions from purchased energy
  • Scope 3: indirect emissions from supply chains and product usage

The complexity of this type of accounting can be staggering - a single iPhone’s carbon footprint involves hundreds of suppliers across dozens of countries.
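As a minimal sketch of what scope-bucketed accounting looks like in practice - the sources and figures below are entirely made up for illustration:

```python
from collections import defaultdict

# Hypothetical emissions ledger: (scope, source, kg CO2e). All figures are illustrative.
entries = [
    (1, "company fleet fuel", 1_200.0),
    (2, "purchased electricity", 3_400.0),
    (3, "cloud AI inference", 560.0),
    (3, "supplier manufacturing", 9_800.0),
]

# Roll each entry up into its scope bucket.
totals = defaultdict(float)
for scope, _source, kg in entries:
    totals[scope] += kg

for scope in sorted(totals):
    print(f"Scope {scope}: {totals[scope]:,.0f} kg CO2e")
```

Even in this toy version, Scope 3 dominates the total - and Scope 3 is exactly where AI usage lands.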

AI creates a massive Scope 3 problem

Imagine you’re a company that’s made a net-zero pledge by 2030. Your employees start using AI tools heavily - ChatGPT for research, Midjourney for marketing materials, GitHub Copilot for development. Suddenly, your Scope 3 emissions are skyrocketing because of all that compute that’s happening in someone else’s data centers.

What do you do?

You can’t exactly tell your marketing team to stop being creative or your developers to stop being productive.

The reality is that many companies will find their AI usage directly conflicts with their climate commitments.

You can’t just “make AI greener” without fundamentally changing how you use it.

Applying Carbon Doctrine to AI Usage

What if we all had a daily carbon budget? You, me, the dog (AI pet collars are on the horizon) and business entities. Let’s consider it like your bank account.

Start each day with, say, 100 carbon units. Spend a unit on your commute to work, and you’re down to 99. Don’t spend your credits today - you can bank them for tomorrow. Produce solar energy? Get some bonus credits! But go into the red, and like an overdraft, you’re still responsible for paying it back - perhaps tomorrow you don’t get to use your carbon units. Or you reach into that bank account and pay up!

But let’s limit this to AI usage only. It’s a natural fit, because AI usage is:

  1. Deterministic - the AI agent or AI solution is doing something specific
  2. Measurable - compute cycles are being metered already
  3. Calculable - we can factor in the carbon cost of a compute cycle - and even better, we can account for the differential of more efficient compute!
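Point 3 is the linchpin, so here’s one way that conversion might be sketched. Every constant below is an assumption made for illustration, not a measured value:

```python
# Illustrative conversion from metered compute to carbon units.
# All constants are assumptions for this sketch, not measured values.
KWH_PER_GPU_SECOND = 0.0001        # assumed energy draw per accelerator-second
GRID_GRAMS_PER_KWH = 400           # assumed grid carbon intensity (gCO2e/kWh)
GRAMS_PER_CARBON_UNIT = 0.3        # assumed exchange rate: 1 carbon unit = 0.3 g

def carbon_units(gpu_seconds: float, efficiency_factor: float = 1.0) -> float:
    """Carbon units for a metered AI action.

    efficiency_factor < 1.0 models the discount for running on more
    efficient (or off-peak, greener) compute.
    """
    grams = gpu_seconds * KWH_PER_GPU_SECOND * GRID_GRAMS_PER_KWH * efficiency_factor
    return grams / GRAMS_PER_CARBON_UNIT

print(carbon_units(10))        # a 10 GPU-second job at the default rate
print(carbon_units(10, 0.5))   # the same job on 2x more efficient compute
```

The efficiency factor is what makes the “half price if you’ll wait until end of day” discount in the examples below possible to meter.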

Here’s how it might work:

  • A basic Gemini prompt: 0.1 carbon units
  • That AI image generation of a cat in a space suit (that you absolutely must have for your LinkedIn post… guilty as charged): 3 carbon units. Upscale it? Add 2 carbon units. Need to animate it? Another 10 carbon units
  • ChatGPT research report on the War of 1812: 12 carbon units. But only 6 carbon units if you’re willing to use more efficient compute that means you’ll have it by end of day, instead of right away
  • Claude Code helping to solve a simple bug fix: 3 carbon units

It’s a simple system, right?

  • Every AI action quoted with a carbon impact metric - choose to proceed, or not!
  • Every AI action counts against your carbon quota
  • Go into the red - you have to catch up
  • Too far into the red - then pay up or get shut down until you’re back at a surplus to spend
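Those four rules can be sketched as a toy ledger. The daily allowance and shutdown threshold below are made-up numbers, and the action costs are the illustrative ones from the list above:

```python
# Toy ledger implementing the quota rules above. Thresholds are illustrative.
class CarbonBudget:
    DAILY_ALLOWANCE = 100.0
    SHUTDOWN_FLOOR = -50.0   # too far into the red and AI access pauses

    def __init__(self) -> None:
        self.balance = self.DAILY_ALLOWANCE

    def new_day(self) -> None:
        # Unspent units bank forward; debt carries over too.
        self.balance += self.DAILY_ALLOWANCE

    def spend(self, units: float) -> bool:
        """Quote-then-spend: blocks the action if it would cross the floor."""
        if self.balance - units < self.SHUTDOWN_FLOOR:
            return False
        self.balance -= units
        return True

budget = CarbonBudget()
budget.spend(3)          # Claude Code bug fix
budget.spend(12)         # ChatGPT research report
print(budget.balance)    # 85.0
```

Note the quote-then-spend shape: the carbon price is shown (and checked) before the action runs, which is what turns the metric into a decision point rather than an after-the-fact report.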

I’d like to say it’s earth-shattering - but it’s not.

The principle is simple - give people transparency. Give them concrete data. Make them aware. Hold them accountable. And then watch the behavior change!

Watch the student decide that they’re actually going to do some of their research project themselves, and only rely on AI for supplementing!

Watch the engineer leverage their code assist AI for solving the hard problems, and troubleshoot why that chart isn’t drawing correctly on their own!

Ok, ok, I guess I can actually plan my own vacation itinerary and give it some thought as opposed to asking an LLM to do it for me.

Watch as we consider more intelligently when we choose to use our AI and when we choose to instead use our organic computers (I’m talking about our own brains here) - because we’ve made the cost of AI clear, understandable, important, and consequential.

I believe this simple, inspired approach could yield a whole new level of AI sustainability.

The Implications

Of course, such a plan would have implications, and some of the more obvious ones are likely to be the areas of massive debate.

From AI All the Time, to Less So

AI is pervading so many facets of life and solutions because, frankly, there’s a LOT it can do in superior or enhanced fashion. While the proposal here stems from the notion that AI usage carries a real cost, and that this cost is unsustainable in the long run, it invariably leads to a decrease in AI usage from the baseline.

While this is by design, it’s easy to argue that this itself is a suboptimal outcome, and thus not one likely to be adopted.

From AI for All, to AI for the Rich

While on the face of it, all would be equally metered, there’s an obvious wrinkle here around normal usage. We have to be able to go into the red (an overdraft of carbon credits). And naturally, it would only make sense that we can buy ourselves out of the red. This, of course, creates an unfair advantage for the wealthy (and for businesses). Those with means can spend and spend, and use AI as abundantly as needed, while the rest of us must hit pause at some point.

If AI is the force multiplier we believe it to be, then the ability to spend without bounds on AI, while the rest of us are effectively throttled, creates a massive output gap.

What Needs to Be True for This to Come to Pass

Like all great ideas and strategies, this one has some critical requirements that must be met - and I’m of the opinion that two exist here, which I’ll call (i) Unless and (ii) Adoption.

Unless

“Unless someone like you cares a whole awful lot, nothing is going to get better. It’s not.” - Dr. Seuss’ The Lorax

That, of course, is the crux of the whole thing - that we need to believe that something different must be done to enable our AI future. That the tools we have today, and those that we would naturally evolve, are insufficient.

We need to believe that our behaviors need to change to make AI sustainable.

And considering that 98% of people have never really given thought to the question “Is AI sustainable?” that’s a pretty tenuous assumption (FYI, I didn’t need an LLM to make up that 98% stat for me - I did it all on my own in the interest of sustainability).

Adoption

We would need to see mass adoption across industries.

From OpenAI, Anthropic, Google, Meta (and on and on) adopting, to major cloud providers - all knowing that it could lead to a decline in use of their solutions (or at least a growth curve with a shallower slope).

We would need centralized coordination and standards bodies. We would need management consultants advising on how to implement. We would need new frameworks.

And to be sure, all the pieces we need exist in one form or another. But this kind of coordinated thinking to hit critical mass is hard, to say the least.

Will It Work? Could It Happen?

I hope.

I wish.

I really don’t know.

I even debate if I want it to.

As a proponent of AI, I want to use it, truly without bounds. I leverage it… well, A LOT… like a lot a lot. And I think you should too.

I’d like to believe in the infinite scalability of compute - but I don’t.

But perhaps it’s going to be like in The Lorax - the conditions have to get incredibly bad for us to feel the pain of forging a different path forward.

Having read this far, I’d ask you to take away the concept: Unless.

My dear reader…

  • Unless someone like you debates your table mate at your next dinner party about whether we can use AI without bounds
  • Unless at your next cocktail hour you ask the person you saunter up next to why they had AI write that email reply
  • Unless we’re willing to ask ourselves what would it take for us to behave differently
  • Unless each of us is willing to consider a different way of using AI tomorrow than yesterday, nothing is going to change… it’s not.

So what’s your take? Would carbon quotas for AI usage change your behavior, or just create another system to game?

If this sparked intrigue or expanded your horizons, let me know what you think.