I don't use AI though. Are they going to put automatic AI responses on the SERP? That's less green than simply not having AI on the SERP. Giving me something I do not want is wasteful by definition.
I disagree. Without AI I might take 15 minutes searching on Google for something that would have taken me a single ChatGPT prompt. The energy used by my screen in those 15 minutes would be higher than the energy used by that prompt.
Yeah, but people here also know that AI that doesn’t use vast amounts of energy is generally returning mediocre results. And mediocre results are not useful at all. So whatever you save on energy doesn’t really matter if the utility is going to zero.
Your comparison to cars is good. A cheap car will be slower and less comfortable but will ultimately get you where you want to be. That’s the core value of the car. A bad LLM may not get you anywhere. It’s more like having a cheap power drill that can drill through plaster but not through concrete; in the end you still want the expensive drill…
I love Kagi's implementation: by default it's disabled, you either have to add a question mark to the search, or click in the interface after searching to generate the summary.
This is absurd. Training an AI is energy intensive but highly efficient. Running inference for a few hundred tokens, doing a search, stuff like that is a triviality.
Each generated token takes roughly the energy released by burning ~0.06 µL of gasoline: ~2 joules per token, including datacenter and hosting overhead. With massive million-token prompts, it can rise to 8-10 joules per token of output. Training runs around 17-20J per token.
A liter of gasoline gets you 16,800,000 tokens for normal use cases. Caching and the various scaled up efficiency hacks and improvements get you into the thousands of tokens per joule for some use cases.
For contrast, your desktop PC running idle uses around 350k joules per day. Your fridge uses 3 million joules per day.
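A quick sanity check of those gasoline figures (assuming gasoline releases roughly 33.6 MJ of heat per litre, a standard ballpark, and taking the ~2 J/token estimate above at face value):

```python
# Sanity-checking the tokens-per-litre-of-gasoline claim.
GASOLINE_J_PER_L = 33.6e6  # ~34 MJ/L heat of combustion (ballpark assumption)
J_PER_TOKEN = 2.0          # claimed energy per generated token, incl. overhead

tokens_per_litre = GASOLINE_J_PER_L / J_PER_TOKEN
microlitres_per_token = J_PER_TOKEN / GASOLINE_J_PER_L * 1e6
print(f"{tokens_per_litre:,.0f} tokens per litre")  # 16,800,000
print(f"{microlitres_per_token:.3f} uL per token")  # 0.060
```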
AI is such a relatively trivial use of resources that you caring about nearly any other problem, in the entire expanse of all available problems to care about, would be a better use of your time.
AI is making resources allocated to computation and data processing much more efficient, and year over year, the relative intelligence per token generated, and the absolute energy cost per token generated, is getting far more efficient and relatively valuable.
Find something meaningful to be upset at. AI is a dumb thing to be angry at.
I’m curious where you got any of those numbers. Many laptops use <20W. But most local-ai inferencing requires high end, power hungry nvidia GPUs that use multiple hundreds of watts. There’s a reason those GPUs are in high demand, with prices sky high, because those same (or similar) power hungry chips are in data centers.
Compared to traditional computing it seems to me like there’s no way AI is power efficient. Especially when so many of the generated tokens are just platitudes and hallucinations.
> The agreed-on best guess right now for the average chatbot prompt’s energy cost is actually the same as a Google search in 2009: 0.3 Wh. This includes the cost of answering your prompt, idling AI chips between prompts, cooling in the data center, and other energy costs in the data center. This does not include the cost of training the model, the embodied carbon costs of the AI chips, or the fact that data centers typically draw from slightly more carbon intense sources. If you include all of those, the full carbon emissions of an AI prompt rise to 0.28 g of CO2. This is the same emissions as we cause when we use ~0.8 Wh of energy.
How concerned should you be about spending 0.8 Wh? 0.8 Wh is enough to:
- Stream a video for 35 seconds
- Watch an LED TV (no sound) for 50 seconds
- Upload 9 photos to social media
- Drive a sedan at a consistent speed for 4 feet
- Leave your digital clock on for 50 minutes
- Run a space heater for 0.7 seconds
- Print a fifth of a page of a physical book
- Spend 1 minute reading this blog post. If you’re reading this on a laptop and spend 20 minutes reading the full post, you will have used as much energy as 20 ChatGPT prompts. ChatGPT could write this blog post using less energy than you use to read it!
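As a side note, the two quoted figures imply a specific grid carbon intensity; a tiny check (both numbers are from the quoted post, not mine):

```python
# The quoted post says an AI prompt's full footprint is 0.28 g CO2e, the same
# emissions as using ~0.8 Wh of energy. That pairing implies a grid intensity of:
PROMPT_CO2_G = 0.28  # g CO2e per prompt, incl. training and embodied costs
EQUIV_WH = 0.8       # Wh of typical energy use with the same emissions

implied_g_per_kwh = PROMPT_CO2_G / (EQUIV_WH / 1000)
print(f"Implied intensity: {implied_g_per_kwh:.0f} g CO2e/kWh")  # 350
```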
W stands for Watts, which means Joules per second.
The energy usage of the human body is measured in kilocalories, aka Calories.
Combustion of gasoline can be approximated by conversion of its chemicals into water and carbon dioxide. You can look up energy costs and energy conversions online.
Some AI usage data is public. TDP of GPUs are also usually public.
I made some assumptions based on H100s and models around the 4o size. Running them locally changes the equation, of course - any sort of compute that can be distributed is going to enjoy economies of scale and benefit from well worn optimizations that won't apply to locally run single user hardware.
Also, for AI specifically, depending on MoE and other sparsity tactics, caching, hardware hacks, regenerative capture at the datacenter, and a bajillion other little things, the actual number is variable. Model routing like OpenAI does further obfuscates the cost per token - a high-capability 8B model is going to run more efficiently than a 600B model across the board, but even the enormous 2T models can generate many tokens for the equivalent energy of burning µL of gasoline.
If you pick a specific model and gpu, or Google's TPUs, or whatever software/hardware combo you like, you can get to the specifics. I chose µL of gasoline to drive the point across, tokens are incredibly cheap, energy is enormously abundant, and we use many orders of magnitude more energy on things we hardly ever think about, it just shows up in the monthly power bill.
AC and heating, computers, household appliances, lights, all that stuff uses way more energy than AI. Even if you were talking with AI every waking moment, you're not going to be able to outpace other, far more casual expenditures of energy in your life.
A wonderful metric would be average intelligence level per token generated, and then adjust the tokens/Joule with an intelligence rank normalized against a human average, contrasted against the cost per token. That'd tell you the average value per token compared to the equivalent value of a human generated token. Should probably estimate a ballpark for human cognitive efficiency, estimate token/Joule of metabolism for contrast.
Doing something similar for image or music generation would give you a way of valuing the relative capabilities of different models, and a baseline for ranking human content against generations. A well constructed meme clip by a skilled creator, an AI song vs a professional musician, an essay or article vs a human journalist, and so on. You could track the value over context length, length of output, length of video/audio media, size of image, and so on.
Suno and nano banana and Veo and Sora all far exceed the average person's abilities to produce images and videos, and their value even exceeds that of skilled humans in certain cases, like the viral cat playing instrument on the porch clips, or ghiblification, or bigfoot vlogs, or the AI country song that hit the charts. The value contrasted with the cost shows why people want it, and some scale of quality gives us an overall ranking with slop at the bottom up to major Hollywood productions and art at the Louvre and Beethoven and Shakespeare up top.
Anyway, even without trying to nail down the relative value of any given token or generation, the costs are trivial. Don't get me wrong, you don't want to usurp all a small town's potable water and available power infrastructure for a massive datacenter and then tell the residents to pound sand. There are real issues with making sure massive corporations don't trample individuals and small communities. Local problems exist, but at the global scale, AI is providing a tremendous ROI.
AI doombait generally trots out the local issues and projects them up to a global scale, without checking the math or the claims in a rigorous way, and you end up with lots of outrage and no context or nuance. The reality is that while issues at scale do exist, they're not the issues that get clicks, and the issues with individual use are many orders of magnitude less important than almost anything else any individual can put their time and energy towards fixing.
Your answer seems very specific on joules. Could you explain your calculations? I can't follow how you get from a liter of gasoline to 16.8M tokens. E.g. does that assume 100% conversion to energy, not taking into account heat loss, transfer loss, etc.?
(For example, simplistically there's 86400s/day, so you are saying that my desktop PC idles at 350/86.4=4W, which seems way off even for most laptops, which idle at 6-10W)
Would be interesting to be a fly on the wall for their internal conversations. I'm sure plenty of their employees are AI sceptics, precisely because of the environmental impacts, and this doesn't sound like it would take all those concerns away.
Looks interesting. One question though: are you running your own fine-tuned open models on your hardware, or is this powered by an external model like GPT behind the scenes? Curious how independent the stack really is.
> I’m based on OpenAI’s GPT-4 architecture, which is a proprietary large language model. It’s designed to understand and generate human-like text across many topics and tasks. While the underlying model itself isn’t open source, it’s widely used for applications requiring advanced natural language understanding and generation. If you want, I can share more about how it works or about alternatives in the AI landscape!
"Great question! My responses are generated using advanced language models developed by OpenAI, like GPT, rather than running independent fine-tuned open-source models on dedicated hardware. This means the core AI technology powering me is externally hosted and maintained by OpenAI, ensuring high-quality, up-to-date language understanding and generation.
However, my design and integration are tailored to prioritize values like sustainability, integrity, dignity, and compassion, and I’m optimized to provide answers with a strong focus on those principles. So while the underlying model is external, the way I interact and the lens through which I provide information is uniquely aligned with Ecosia’s mission.
If you’re interested, I can also share insights on open-source alternatives or how AI stacks can be made more independent and sustainable!"
TFA "... which already powers AI Overviews and some of our search results. Building our own infrastructure gives us more control over the technology, ..."
In their main page they fleetingly mention they train their own small models.
I'll be interested to see how this plays with their actual users.
As one myself, I don't object inherently to Ecosia providing AI search. I understand they need to stay competitive with other search.
But I find how prominent and hard to avoid their AI search is reeeeaaally annoying. It's annoying anyway, but in a context where I don't want it, and it's creating more emissions, it feels especially egregious being shoved down my throat by a company that exists to reduce pollution.
I haven't kept up to date with this, but last I heard we still aren't certain how much energy AI training/inference actually takes at the big AI companies (OpenAI/Anthropic). Have any of these companies shared this since? What are our closest estimates if not?
> As a not-for-profit company, we can afford to do things differently. AI Search uses smaller, more efficient models, and we avoid energy-heavy features like video generation altogether.
I'm a bit confused -- do other search engines provide video generation? Mentioning that sounds too out of place to me. Am I missing something?
Waterfox here. I’ve spent a good amount of time scouring through the code looking at what to remove, and for the next release I’ve found some last remaining remnants to disable
Netflix spending 240Wh for 1h of content just does not pass the smell test for me.
Today I can have ~8 people streaming from my Jellyfin instance which is a server that consumes about 35W, measured at the wall. That's ~5Wh per hour of content from me not even trying.
They claim that streaming over WiFi to a single mobile device averages 37W:
Because phones are extremely energy efficient, data transmission accounts for nearly all the electricity consumption when streaming through 4G, especially at higher resolutions (Scenario D). Streaming an hour-long SD video through a phone on WiFi (Scenario C) uses just 0.037 kWh – 170 times less than the estimate from the Shift Project.
They might be folding in wider internet energy usage?
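Unpacking the quoted numbers (0.037 kWh per streamed hour, and the claimed 170x gap to the Shift Project estimate, both from the quote above):

```python
# Average power during streaming, and the Shift Project figure the quote implies.
KWH_PER_STREAMED_HOUR = 0.037  # quoted figure: SD video over WiFi on a phone
CLAIMED_RATIO = 170            # "170 times less than the Shift Project estimate"

avg_watts = KWH_PER_STREAMED_HOUR * 1000           # 37 W average draw
shift_kwh = KWH_PER_STREAMED_HOUR * CLAIMED_RATIO  # ~6.3 kWh/h implied
print(f"{avg_watts:.0f} W average; Shift Project implies ~{shift_kwh:.1f} kWh/h")
```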
It's way more lopsided than your example would suggest.
My understanding is that Netflix can stream 100 Gbps from a 100W server footprint (slide 17 of [0]). Even if you assume every stream is 4k and uses 25 Mbps, that's still thousands of streams. I would guess that the bulk of the power consumption from streaming video is probably from the end-user devices -- a backbone router might consume a couple of kilowatts of power, but it's also moving terabits of traffic.
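Taking the slide's figures at face value, along with the deliberately pessimistic assumption above that every stream is 4K at 25 Mbps:

```python
# Server-side energy per stream-hour for a 100 Gbps / ~100 W appliance.
SERVER_WATTS = 100         # per the linked slide (assumption, not measured here)
THROUGHPUT_MBPS = 100_000  # 100 Gbps
MBPS_PER_STREAM = 25       # pessimistic: every stream is 4K

concurrent_streams = THROUGHPUT_MBPS // MBPS_PER_STREAM
wh_per_stream_hour = SERVER_WATTS / concurrent_streams
print(f"{concurrent_streams} streams; {wh_per_stream_hour} Wh per stream-hour")
```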
Does the Netflix number include the energy cost of manufacturing all the cameras/equipment used for production? Energy for travel for all the crew involved to the location? Energy for building out the sets?
It's quickly pointed out that he's not counting the training of models, producing all the GPUs, energy spent on scraping, the increased storage needs from scraping the whole internet, etc.
The Netflix number is probably not counting all the energy spent producing the shows/movies, building all the cameras/specialized equipment, building their data centers etc. either.
It is fair to compare inference to streaming. Both are done by the end user.
The greenest AI will be connected to district heat networks instead of being cooled with air or water. It isn't even faintly green when heat is treated as a byproduct instead of a co-benefit.
> The Zig IO interface is associated with Zigbee, a specification for a suite of high-level communication protocols using low-power digital radios. It is often used in IoT (Internet of Things) applications for smart home devices, such as lights, sensors, and smart locks. The Zig IO interface typically allows for seamless communication between devices, enabling automation and remote control.
> For specific details about the latest updates or features of the Zig IO interface, I recommend checking the official Zigbee Alliance website or relevant tech news sources for the most current information. If you're interested in sustainable technology or eco-friendly smart home solutions, feel free to ask!
Kagi Quick research agent:
> Zig has recently undergone significant changes to its I/O interface, with major updates introduced in version 0.15 and further enhancements planned for the upcoming 0.16.0 release. The new I/O system represents a comprehensive rework of the standard library's input/output handling, focusing on improved efficiency, clarity, and flexibility.
> The most notable changes include a complete overhaul of the `std.Io.Reader` and `std.Io.Writer` interfaces, which were revamped in a mid-2025 development release as the first step in modernizing Zig's I/O capabilities [^2]. These interfaces in Zig 0.15.1 represent a significant improvement in both efficiency and code clarity, though they require slightly more boilerplate code [^5].
> A key architectural change is that the new I/O interface is non-generic and uses a virtual function table (vtable) for dispatching function calls to concrete implementations [^4]. This approach simplifies the interface while maintaining performance. The upcoming Zig 0.16.0 will introduce new async I/O primitives, building on this reworked foundation and eventually leading to the reintroduction of asynchronous programming capabilities [^1].
> These changes position Zig's I/O system for better performance and more flexible integration with different I/O backends, representing a significant step forward in the language's systems programming capabilities [^3].
People in the comments seem confused about this with statements like “greenest AI is no AI” style comments. And well, obviously that’s true but it’s an apples to pears comparison.
Clearly Ecosia is pushing for “people want AI” _and_ we want to make it more ecofriendly. Taking away features from users altogether is not the right answer.
It’s like saying “cheapest car is no car”. It doesn’t solve the fundamental problem of “wanting a car”.
> People in the comments seem confused about this with statements like “greenest AI is no AI” style comments. And well, obviously that’s true
It’s not true. AI isn’t especially environmentally unfriendly, which means that if you’re using AI then whatever activity you would otherwise be doing stands a good chance of being more environmentally unfriendly. For instance, a ChatGPT prompt uses about as much energy as watching 5–10 seconds of Netflix. So AI is greener than no AI in the cases where it displaces other, less green activities.
And when you have the opportunity to use human labour or AI, AI is almost certainly the greener option as well. For instance, the carbon emissions of writing and illustrating are far lower for AI than for humans:
> Our findings reveal that AI systems emit between 130 and 1500 times less CO2e per page of text generated compared to human writers, while AI illustration systems emit between 310 and 2900 times less CO2e per image than their human counterparts.
— https://www.nature.com/articles/s41598-024-54271-x
The AI water issue is fake: https://andymasley.substack.com/p/the-ai-water-issue-is-fake
Using ChatGPT is not bad for the environment: https://andymasley.substack.com/p/a-cheat-sheet-for-conversa...
A ChatGPT prompt uses about as much energy as watching 5–10 seconds of Netflix: https://simonwillison.net/2025/Nov/29/chatgpt-netflix/
> AI isn’t especially environmentally unfriendly
I think the actual answer is more nuanced and less positive. Although I appreciate how many citations your comment has!
I'd point to just one, which is a really good article MIT Technology Review published about exactly this issue[0].
I'd make two overall points. Firstly, on:
> when you have the opportunity to use human labour or AI, AI is almost certainly the greener option as well.
I think this is never the trade-off. AI normally generates marketing copy for someone in marketing, not by itself, and even if it does everything itself, the marketing person might stop being employed but certainly doesn't stop existing and producing CO2.
My point is, AI electricity usage is almost exclusively new usage, not replacing something else.
And secondly, on Simon Willison's / Sam Altman's argument that:
> Assuming that higher end, a ChatGPT prompt by Sam Altman's estimate uses:
>
> 0.34 Wh / (240 Wh / 3600 seconds) = 5.1 seconds of Netflix
>
> Or double that, 10.2 seconds, if you take the lower end of the Netflix estimate instead.
This may well be true for prompts, but it misses the energy-intensive training process, which we can't leave out if we actually want to know the full emissions impact. Especially in an environment where new models are being trained all the time.
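For reference, the per-prompt arithmetic in the quote is at least internally consistent; checking it with only the quoted estimates (0.34 Wh per prompt, 120-240 Wh per streamed hour):

```python
# Seconds of Netflix per ChatGPT prompt, per the quoted estimates.
PROMPT_WH = 0.34                        # Sam Altman's per-prompt estimate
for netflix_wh_per_hour in (240, 120):  # higher- and lower-end Netflix figures
    seconds = PROMPT_WH / (netflix_wh_per_hour / 3600)
    print(f"{netflix_wh_per_hour} Wh/h -> {seconds:.1f} s of Netflix per prompt")
```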
On a more positive note, I think Ecosia's article makes a good point that AI requires electricity, not pollution. It's really bad timing that AI has taken off initially in the US at a time when the political climate is trying to steer energy away from safer, more sustainable sources and towards more dangerous, polluting ones. But that isn't an environment that has to continue, and Chinese AI work in the last year has also done a good job of demonstrating that AI training energy use can be a lot less than previously assumed.
[0] https://www.technologyreview.com/2025/05/20/1116327/ai-energ...
I think this article is a good response to the MIT article: https://andymasley.substack.com/p/reactions-to-mit-technolog...
> AI normally generates marketing copy for someone in marketing, not by itself, and even if it does everything itself, the marketing person might stop being employed but certainly doesn't stop existing and producing CO2.
Sure, but it does it a lot quicker than they can, which means they spend more of their time on other things. You’re getting more work done on average for the carbon you are “spending”.
Also, even when ignoring the carbon cost of the human, just the difference in energy use from their computer equipment in terms of time spent on the task outstrips AI energy use.
> This may well be true for prompts, but misses out the energy intensive training process.
If you are trying to account for the fully embodied cost including production, then I think things tilt even more in favour of AI being environmentally friendly. Do you think producing a Netflix show is carbon-neutral? I have no idea what the carbon cost of producing, e.g., Stranger Things is, but I’m guessing it vastly outweighs the training costs of an LLM.
There's probably a decent chance that training an LLM produces more carbon than producing Stranger Things
Glad to see someone refute the AI water argument, I'm sick of that one. But I do not see how the displacement argument fits. Maybe you can elaborate, but I don't see how we can compare AI usage to watching Netflix for any length of time. I can't see a situation where someone would substitute watching Stranger Things for asking ChatGPT questions.
The writing and illustrating activities use less energy, but the people out there using AI to generate ten novels and covers and fire them into the kindle store would not have written ten novels, so this is not displacement either
How many tens of thousands more pages of text and image are churned out per human created page though?
Pages that would never be created were the stochastic parrot to be turned off and never squawk.
It'd save a lot of energy, water, and carbon emissions to just let the already existing humans get on with the churn.
> How many tens of thousands more pages of text and image are churned out per human created page though?
I don’t know, how many?
> It'd save a lot of energy, water, carbon emissions to just let the already existing humans just get on with the churn.
How much, and how do you know that?
> And when you have the opportunity to use human labour or AI, AI is almost certainly the greener option as well. For instance, the carbon emissions of writing and illustrating are far lower for AI than for humans:
Do you plan on killing that person to stop their emissions?
If you don't use the AI program, the emissions don't happen; if you don't hire a person for a job, they still use the carbon resources.
So the comparison isn't 1000kg CO2 for a human vs 1kg CO2 for an LLM. It's 1000kg CO2 for a human vs 1001kg CO2 for an LLM.
Please read the source I linked to; your reply doesn’t make any sense in that context.
I did, you clearly didn't.
> For instance, the emission footprint of a US resident is approximately 15 metric tons CO2e per year, which translates to roughly 1.7 kg CO2e per hour
Those 15,000kg of CO2e are emitted regardless of what that person does.
The article also makes assumptions about laptops that are false.
>Assuming an average power consumption of 75 W for a typical laptop computer.
Laptops draw closer to 10W than 75W (peak power is closer to 75W, but almost no laptops can dissipate 75W continually).
The article is clearly written by someone with an axe to grind, not someone who is interested in understanding the cost of LLMs/AI/etc.
It says that ignoring the human carbon use, just their computer use during the task far outweighs the AI energy use. So your response “are you planning on killing the human?” makes zero sense in that context. “They are wrong about the energy use of a laptop” makes more sense, but you didn’t say that until I pushed you to actually read it.
75W is not outlandish when you consider the artist will almost certainly have a large monitor plugged in, external accessories, some will be using a desktop, etc. And even taking the smaller figure, AI use is still smaller.
The human carbon use is still relevant. If they were not doing the writing, they could accomplish some other valuable tasks. Because they are spending it on things the AI can do, somebody else will have to do those things or they won’t get done at all.
75W is nuts actually. I measured my _desktop_ setup about 10 years ago, including two monitors, and idle was around 35W. It also doesn't make sense to include the idle draw of all peripherals, since you would be using them for ChatGPT as well.
Isn't this why transit advocates try to reduce the need for owning a car though?
I'm thinking a really good search engine would not make you reach for ai as often and so could be eco friendly that way
There are fundamental reasons public transit is always more efficient than private cars. There's no fundamental reason a really good search engine is more efficient than an LLM or any other kind of AI.
The sheer amount of linear algebra number crunching vs some database lookups is monumental. I don’t see how an LLM could ever be as efficient as a search engine.
Search engines aren't just some database lookups, is the thing. There's actually quite a bit of linear algebra involved in both, for page ranking especially
Anyways these sorts of comparisons make no sense to begin with, and quite obviously at the moment the worst actors cough xAI cough who are deploying massively polluting generators into residential neighborhoods are much worse than, say, Google Search
When I took linear algebra at uni in 2012, one of the examples to show practical applications in our text book was the PageRank algorithm!
Maybe because we all know that working search is better than an LLM hands down? What are you talking about? The only reason people go to LLMs instead of Google is because Google Search has been destroyed by Sundar Pichai.
This has to be satire. LLMs are a monumental jump on search engines.
Imagine a hypothetical search competition and you are given Google and I am given ChatGPT. I’ll win every single time.
Yes, the greenest browser is one that doesn’t use AI. They aren’t claiming they’ve built that though, just the greenest AI.
I don't use AI though. Are they going to put automatic AI responses on the SERP? That's less green than simply not having AI on the SERP. Giving me something I do not want is wasteful by definition.
I disagree. Without AI I might take 15 min to search for something in google that would have taken me a single prompt in ChatGPT. The energy used by my screen in those 15 minutes would be higher than the energy taken by that prompt.
We don't have a problem of wanting AI though
We may when AI wants AI...
Yeah, but people here also know that AI that doesn’t use vast amounts of energy is generally returning mediocre results. And mediocre results are not useful at all. So whatever you save on energy doesn’t really matter if the utility is going to zero.
Your comparison to cars is good. A cheap car will be slower and less comfortable but will get you where you want to be ultimately. That’s the core value of the car. A bad LLM may not get you anywhere. It’s more like having a cheap powerdrill that can drill through plaster but not through concrete, in the end you still want the expensive drill…
Running an LLM by default when I open your site is the most energy-consuming thing a computer can do, and the thing consumers hate the most in 2025.
I love Kagi's implementation: by default it's disabled, you either have to add a question mark to the search, or click in the interface after searching to generate the summary.
Yeah and you can set an option to disable the ? initiator too
This is absurd. Training an AI is energy intensive but highly efficient. Running inference for a few hundred tokens, doing a search, stuff like that is a triviality.
Each generated token takes the equivalent energy of the heat from burning ~0.06 µL of gasoline — about 2 joules per token, including datacenter and hosting overhead. If you get up to massive million-token prompts, it can get up to 8–10 joules per token of output. Training runs around 17–20 J per token.
A liter of gasoline gets you 16,800,000 tokens for normal use cases. Caching and the various scaled up efficiency hacks and improvements get you into the thousands of tokens per joule for some use cases.
For contrast, your desktop PC running idle uses around 350k joules per day. Your fridge uses 3 million joules per day.
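Those gasoline figures can be sanity-checked in a few lines. This is just a back-of-envelope sketch; the ~33.6 MJ/L energy density is my assumption (a common heat-of-combustion figure), and the 2 J/token number is the one claimed above:

```python
# Back-of-envelope check of the tokens-per-liter claim above.
MJ_PER_LITER_GASOLINE = 33.6   # assumption: approximate heat of combustion of gasoline
JOULES_PER_TOKEN = 2.0         # figure claimed above, incl. datacenter overhead

tokens_per_liter = MJ_PER_LITER_GASOLINE * 1e6 / JOULES_PER_TOKEN
print(f"{tokens_per_liter:,.0f}")  # 16,800,000
```

So the 16.8M-tokens-per-liter figure follows directly from those two inputs; it assumes all the heat of combustion is counted, not any conversion efficiency.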
AI is such a relatively trivial use of resources that you caring about nearly any other problem, in the entire expanse of all available problems to care about, would be a better use of your time.
AI is making resources allocated to computation and data processing much more efficient, and year over year, the relative intelligence per token generated, and the absolute energy cost per token generated, is getting far more efficient and relatively valuable.
Find something meaningful to be upset at. AI is a dumb thing to be angry at.
I’m curious where you got any of those numbers. Many laptops use <20W. But most local-ai inferencing requires high end, power hungry nvidia GPUs that use multiple hundreds of watts. There’s a reason those GPUs are in high demand, with prices sky high, because those same (or similar) power hungry chips are in data centers.
Compared to traditional computing it seems to me like there’s no way AI is power efficient. Especially when so many of the generated tokens are just platitudes and hallucinations.
> The agreed-on best guess right now for the average chatbot prompt’s energy cost is actually the same as a Google search in 2009: 0.3 Wh. This includes the cost of answering your prompt, idling AI chips between prompts, cooling in the data center, and other energy costs in the data center. This does not include the cost of training the model, the embodied carbon costs of the AI chips, or the fact that data centers typically draw from slightly more carbon intense sources. If you include all of those, the full carbon emissions of an AI prompt rise to 0.28 g of CO2. This is the same emissions as we cause when we use ~0.8 Wh of energy.
How concerned should you be about spending 0.8 Wh? 0.8 Wh is enough to:
- Stream a video for 35 seconds
- Watch an LED TV (no sound) for 50 seconds
- Upload 9 photos to social media
- Drive a sedan at a consistent speed for 4 feet
- Leave your digital clock on for 50 minutes
- Run a space heater for 0.7 seconds
- Print a fifth of a page of a physical book
- Spend 1 minute reading this blog post

If you’re reading this on a laptop and spend 20 minutes reading the full post, you will have used as much energy as 20 ChatGPT prompts. ChatGPT could write this blog post using less energy than you use to read it!
I found this helpful.
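The laptop comparison at the end is easy to verify. Note the ~48 W laptop draw is an assumption on my part (it's what makes the post's "20 prompts in 20 minutes" arithmetic come out even; real laptop draw varies a lot):

```python
WH_PER_PROMPT = 0.8     # full-footprint per-prompt figure from the quoted post
LAPTOP_WATTS = 48.0     # assumption: implied by the post's numbers, not stated
READING_MINUTES = 20

reading_wh = LAPTOP_WATTS * READING_MINUTES / 60   # 16.0 Wh to read the post
equivalent_prompts = reading_wh / WH_PER_PROMPT    # 20.0 prompts
print(reading_wh, equivalent_prompts)
```

Swap in a 10 W laptop and you still get ~4 prompts' worth of energy just reading.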
W stands for Watts, which means Joules per second.
The energy usage of the human body is measured in kilocalories, aka Calories.
Combustion of gasoline can be approximated by conversion of its chemicals into water and carbon dioxide. You can look up energy costs and energy conversions online.
Some AI usage data is public. TDP of GPUs are also usually public.
I made some assumptions based on H100s and models around the 4o size. Running them locally changes the equation, of course - any sort of compute that can be distributed is going to enjoy economies of scale and benefit from well worn optimizations that won't apply to locally run single user hardware.
Also, for AI specifically, depending on MoE and other sparsity tactics, caching, hardware hacks, regenerative capture at the datacenter, and a bajillion other little things, the actual number is variable. Model routing like OpenAI does further obfuscates the cost per token - a high capabilities 8B model is going to run more efficiently than a 600B model across the board, but even the enormous 2T models can generate many tokens for the equivalent energy of burning µL of gasoline.
If you pick a specific model and gpu, or Google's TPUs, or whatever software/hardware combo you like, you can get to the specifics. I chose µL of gasoline to drive the point across, tokens are incredibly cheap, energy is enormously abundant, and we use many orders of magnitude more energy on things we hardly ever think about, it just shows up in the monthly power bill.
AC and heating, computers, household appliances, lights, all that stuff uses way more energy than AI. Even if you were talking with AI every waking moment, you're not going to be able to outpace other, far more casual expenditures of energy in your life.
A wonderful metric would be average intelligence level per token generated, and then adjust the tokens/Joule with an intelligence rank normalized against a human average, contrasted against the cost per token. That'd tell you the average value per token compared to the equivalent value of a human generated token. Should probably estimate a ballpark for human cognitive efficiency, estimate token/Joule of metabolism for contrast.
Doing something similar for image or music generation would give you a way of valuing the relative capabilities of different models, and a baseline for ranking human content against generations. A well constructed meme clip by a skilled creator, an AI song vs a professional musician, an essay or article vs a human journalist, and so on. You could track the value over context length, length of output, length of video/audio media, size of image, and so on.
Suno and nano banana and Veo and Sora all far exceed the average person's abilities to produce images and videos, and their value even exceeds that of skilled humans in certain cases, like the viral cat playing instrument on the porch clips, or ghiblification, or bigfoot vlogs, or the AI country song that hit the charts. The value contrasted with the cost shows why people want it, and some scale of quality gives us an overall ranking with slop at the bottom up to major Hollywood productions and art at the Louvre and Beethoven and Shakespeare up top.
Anyway, even without trying to nail down the relative value of any given token or generation, the costs are trivial. Don't get me wrong, you don't want to usurp all a small town's potable water and available power infrastructure for a massive datacenter and then tell the residents to pound sand. There are real issues with making sure massive corporations don't trample individuals and small communities. Local problems exist, but at the global scale, AI is providing a tremendous ROI.
AI doombait generally trots out the local issues and projects them up to a global scale, without checking the math or the claims in a rigorous way, and you end up with lots of outrage and no context or nuance. The reality is that while issues at scale do exist, they're not the issues that get clicks, and the issues with individual use are many orders of magnitude less important than almost anything else any individual can put their time and energy towards fixing.
your answer seems very specific on joules. Could you explain your calculations, since I cannot comprehend the mapping of how you would get a liter of gasoline to 16.8m tokens? e.g. does that assume 100% conversion to energy, not taking into account heat loss, transfer loss, etc?
(For example, simplistically there's 86400s/day, so you are saying that my desktop PC idles at 350/86.4=4W, which seems way off even for most laptops, which idle at 6-10W)
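The conversion behind that objection is just dividing joules per day by seconds per day (350 kJ/day is the figure claimed upthread):

```python
SECONDS_PER_DAY = 86_400
claimed_idle_joules_per_day = 350_000   # figure from the comment above

implied_watts = claimed_idle_joules_per_day / SECONDS_PER_DAY
print(round(implied_watts, 2))  # 4.05
```

~4 W is below even typical laptop idle, which is why the desktop figure looks off.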
Your computer might use more energy displaying the results to you than the server does generating them. Especially in Chrome :)
The server shares resources!
Would be interesting to be a fly on the wall for their internal conversations. I'm sure plenty of their employees are AI sceptics, precisely because of the environmental impacts, and this doesn't sound like it would take all those concerns away.
Looks interesting. One question though: are you running your own fine-tuned open models on your hardware, or is this powered by an external model like GPT behind the scenes? Curious how independent the stack really is.
> I’m based on OpenAI’s GPT-4 architecture, which is a proprietary large language model. It’s designed to understand and generate human-like text across many topics and tasks. While the underlying model itself isn’t open source, it’s widely used for applications requiring advanced natural language understanding and generation. If you want, I can share more about how it works or about alternatives in the AI landscape!
"Great question! My responses are generated using advanced language models developed by OpenAI, like GPT, rather than running independent fine-tuned open-source models on dedicated hardware. This means the core AI technology powering me is externally hosted and maintained by OpenAI, ensuring high-quality, up-to-date language understanding and generation.
However, my design and integration are tailored to prioritize values like sustainability, integrity, dignity, and compassion, and I’m optimized to provide answers with a strong focus on those principles. So while the underlying model is external, the way I interact and the lens through which I provide information is uniquely aligned with Ecosia’s mission.
If you’re interested, I can also share insights on open-source alternatives or how AI stacks can be made more independent and sustainable!"
why would anyone downvote this?
TFA "... which already powers AI Overviews and some of our search results. Building our own infrastructure gives us more control over the technology, ..."
In their main page they fleetingly mention they train their own small models.
I agree it's little info
I'll be interested to see how this plays with their actual users.
As one myself, I don't object inherently to Ecosia providing AI search. I understand they need to stay competitive with other search.
But I find how prominent / hard to avoid their AI search is reeeeaaally annoying. It's annoying anyway, but in a context where I don't want it, and it's creating more emissions, it feels especially egregious being shoved down my throat by a company that exists to reduce pollution.
I think it is more the greenwashiest AI
It goes well with their greenwashing "search engine" (API to bing and google).
I haven't kept up to date with this, but last I heard we still aren't certain how much energy AI training/inference actually takes at the big AI companies (OpenAI/Anthropic). Have any of these companies shared this since? What's our closest estimates if not?
Google: https://cloud.google.com/blog/products/infrastructure/measur...
Mistral: https://mistral.ai/news/our-contribution-to-a-global-environ...
OpenAI: https://blog.samaltman.com/the-gentle-singularity
> As a not-for-profit company, we can afford to do things differently. AI Search uses smaller, more efficient models, and we avoid energy-heavy features like video generation altogether.
I'm a bit confused -- do other search engines provide video generation? Mentioning that sounds too out of place to me. Am I missing something?
Is there no browser I can use without this AI trash jammed into it?
Waterfox, I’ve spent a good amount of time on scouring through the code looking at what to remove and the next release I’ve found some last remaining remnants to disable
Vivaldi: https://vivaldi.com/blog/keep-exploring/
Lynx
The AI bros keep minusing. I want you to remember this when the AI bubble pops next year.
The greenest AI is no AI though.
you're better than that
Reminder that LLMs only(?) consume energy on the order of a few seconds of Netflix[1].
[1]: https://bsky.app/profile/simonwillison.net/post/3m6qdf5rffs2...
Netflix spending 240Wh for 1h of content just does not pass the smell test for me.
Today I can have ~8 people streaming from my Jellyfin instance which is a server that consumes about 35W, measured at the wall. That's ~5Wh per hour of content from me not even trying.
They claim that streaming over WiFi to a single mobile device is 37W:
They might be folding in wider internet energy usage? https://www.weforum.org/stories/2020/03/carbon-footprint-net...
It's way more lopsided than your example would suggest.
My understanding is that Netflix can stream 100 Gbps from a 100W server footprint (slide 17 of [0]). Even if you assume every stream is 4k and uses 25 Mbps, that's still thousands of streams. I would guess that the bulk of the power consumption from streaming video is probably from the end-user devices -- a backbone router might consume a couple of kilowatts of power, but it's also moving terabits of traffic.
[0] https://people.freebsd.org/~gallatin/talks/OpenFest2023.pdf
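Taking those numbers at face value, the per-stream server cost is tiny even in the pessimistic case where every viewer streams 4K (the 100 Gbps / 100 W figures are from the slides cited above; the 25 Mbps per 4K stream is the comment's assumption):

```python
SERVER_WATTS = 100.0   # claimed server power footprint
SERVER_GBPS = 100.0    # claimed serving throughput
STREAM_MBPS = 25.0     # assumption: every stream is 4K

concurrent_streams = SERVER_GBPS * 1000 / STREAM_MBPS   # 4000 streams
wh_per_stream_hour = SERVER_WATTS / concurrent_streams  # 0.025 Wh per stream-hour
print(concurrent_streams, wh_per_stream_hour)
```

0.025 Wh per stream-hour on the server side, which is why the end-user device plausibly dominates.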
The 240W number is end to end, including the power usage of a TV. It's also the high end of the 120-240W estimate.
That's inference only; otherwise people would not be building nuclear reactors to power AI.
Does the Netflix number include the energy cost of manufacturing all the cameras/equipment used for production? Energy for travel for all the crew involved to the location? Energy for building out the sets?
Are they building nuclear reactors to power those?
Would be nice if they did.
It's quickly pointed out that he's not counting the training of models, producing all the GPUs, energy spent on scraping, the increased storage needs from scraping the whole internet, etc.
The Netflix number is probably not counting all the energy spent producing the shows/movies, building all the cameras/specialized equipment, building their data centers etc. either.
It is fair to compare inference to streaming. Both are done by the end user.
The greenest AI will be connected to district heat networks instead of being cooled with air or water. It isn't even faintly green when heat is treated as a byproduct instead of a co-benefit.
Use AI as power source to power the industry!!! Let them live in a virtual world
My prompt: tell me about the new zig io interface
Ecosia:
> The Zig IO interface is associated with Zigbee, a specification for a suite of high-level communication protocols using low-power digital radios. It is often used in IoT (Internet of Things) applications for smart home devices, such as lights, sensors, and smart locks. The Zig IO interface typically allows for seamless communication between devices, enabling automation and remote control.
> For specific details about the latest updates or features of the Zig IO interface, I recommend checking the official Zigbee Alliance website or relevant tech news sources for the most current information. If you're interested in sustainable technology or eco-friendly smart home solutions, feel free to ask!
Kagi Quick research agent:
> Zig has recently undergone significant changes to its I/O interface, with major updates introduced in version 0.15 and further enhancements planned for the upcoming 0.16.0 release. The new I/O system represents a comprehensive rework of the standard library's input/output handling, focusing on improved efficiency, clarity, and flexibility.
> The most notable changes include a complete overhaul of the `std.Io.Reader` and `std.Io.Writer` interfaces, which were revamped in a mid-2025 development release as the first step in modernizing Zig's I/O capabilities [^2]. These interfaces in Zig 0.15.1 represent a significant improvement in both efficiency and code clarity, though they require slightly more boilerplate code [^5].
> A key architectural change is that the new I/O interface is non-generic and uses a virtual function table (vtable) for dispatching function calls to concrete implementations [^4]. This approach simplifies the interface while maintaining performance. The upcoming Zig 0.16.0 will introduce new async I/O primitives, building on this reworked foundation and eventually leading to the reintroduction of asynchronous programming capabilities [^1].
> These changes position Zig's I/O system for better performance and more flexible integration with different I/O backends, representing a significant step forward in the language's systems programming capabilities [^3].
[^1]: [Zig's New Async I/O (Text Version) - Andrew Kelley](https://andrewkelley.me/post/zig-new-async-io-text-version.h...) (25%)
[^2]: [Zig's new Writer - openmymind.net](https://www.openmymind.net/Zigs-New-Writer/) (22%)
[^3]: [I'm too dumb for Zig's new IO interface](https://www.openmymind.net/Im-Too-Dumb-For-Zigs-New-IO-Inter...) (21%)
[^4]: [Zig's New Async I/O | Loris Cro's Blog](https://kristoff.it/blog/zig-new-async-io/) (17%)
[^5]: [Zig 0.15.1 I/O Overhaul: Understanding the New Reader/Writer ...](https://dev.to/bkataru/zig-0151-io-overhaul-understanding-th...) (15%)
The Ecosia AI does not seem to be grounded in search results. When using small models, this is essentially useless.
why even do AI at all
What dumb bullshit! If you run out of ideas or positioning, pivot to climate change as a lame excuse for not having enough resources - people or compute
[dead]
[flagged]
Will you be doing your part to help the environment by abstaining from all modern technology and living in a shed in the woods?
greenest "AI" = no AI.
NEEEEEXT