that the current "servers and AI are going to suck all of the energy produced by the grid in XXX months or years" hysteria... is BS.
That will never happen. Period.
What happens here is that various companies or organizations look at what they want to solve ( either math equations or AI "relationship evaluations" and the LLMs that go with them )... and then someone says "that will take X thousand GB200 or NVL72 racks ( picking on Nvidia here ), and those racks take Y amount of electricity to run ( GPUs, CPUs, memory, storage, and cooling for all of it )"... and extrapolates that into the future to meet the "demand" postulated.
Three things happen...
1. The next generation of CPU, GPU, memory, storage, etc. takes much less energy to produce the same result.
2. Some very clever people, smarter than me, figure out how to achieve the same result with fewer calculations ( quantum computing, or just better equations ).
3. The demand never materializes... or is delayed long enough to be solved by numbers 1 and 2.
I happen to be very good at number 1. And I can promise the next generation of AI servers ( and math supercomputers ) will use a lot less power to do the same job as current state-of-the-art machines. I assume numbers 2 and 3 will also happen, given the amount of money involved.
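To put some numbers on the extrapolation game, here is a quick sketch. Every figure in it is a placeholder I made up for illustration... not a real GB200/NVL72 spec, not an actual demand forecast:

```python
# A back-of-envelope sketch of the argument above. Every number here is a
# made-up placeholder, not a real GB200/NVL72 spec or an actual forecast.

racks_today = 1000            # hypothetical: racks "needed" for today's projected workload
kw_per_rack = 120             # hypothetical: GPUs, CPUs, memory, storage, and cooling
demand_doubling_years = 2     # hypothetical: postulated "demand" doubles every 2 years
gen_length_years = 2          # hypothetical: a new hardware generation every 2 years
gen_efficiency_gain = 0.5     # hypothetical: each generation does the same work on half the power

def naive_projection_mw(years: int) -> float:
    """The hysteria version: demand grows, power per rack never improves."""
    racks = racks_today * 2 ** (years / demand_doubling_years)
    return racks * kw_per_rack / 1000

def adjusted_projection_mw(years: int) -> float:
    """Same demand growth, but each generation does the job on less power (point 1)."""
    racks = racks_today * 2 ** (years / demand_doubling_years)
    per_rack_kw = kw_per_rack * gen_efficiency_gain ** (years // gen_length_years)
    return racks * per_rack_kw / 1000

for y in (0, 2, 4, 6):
    print(f"{y} years out: naive ~{naive_projection_mw(y):,.0f} MW, "
          f"with efficiency gains ~{adjusted_projection_mw(y):,.0f} MW")
```

With those made-up numbers the naive line goes 120, 240, 480, 960 MW while the adjusted line stays flat at 120 MW... which is the whole point. The scary curves assume the hardware never improves.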
BTW, I would take a reasonable guess that all of DU ( web server, archives, etc ) could easily be run on about 100 watts of electricity today... using newer SBCs and SSD storage. Not including all of the power that drives the routers, switches, wifi towers, etc to deliver DU to your phone or laptop.
Pretty sure I could host DU using no more than a few RUs of rack space and SBCs.
Someone would have to give me traffic stats, amount of data in the active and archive storage, etc. But if we are talking under 1 PB of storage and under a GB/sec of active requests... yeah, 100 watts will do the trick. Maybe 200 ( just in case someone wants to make a bet ).
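For anyone who wants to sanity-check the bet, here is a rough sketch. Every figure below is a generic hardware assumption on my part ( drive sizes, per-device wattage, how many SBCs ), not a measurement of DU's actual setup:

```python
# Back-of-envelope check on the 100-200 W bet. Every figure below is a
# rough assumption about generic hardware, not a measurement of DU itself.

storage_needed_tb = 1000     # "under 1 PB" of active + archive data
ssd_tb_each = 61             # the largest enterprise NVMe drives are roughly this size today
watts_per_ssd = 5            # hypothetical average draw, assuming drives sit near idle most of the time

sbc_count = 4                # hypothetical: a few SBCs split web, app, and database duty
watts_per_sbc = 8            # typical load power for a modern ARM SBC
switch_watts = 10            # hypothetical small switch tying it all together

ssd_count = -(-storage_needed_tb // ssd_tb_each)   # ceiling division -> 17 drives
total_watts = ssd_count * watts_per_ssd + sbc_count * watts_per_sbc + switch_watts

print(f"{ssd_count} SSDs + {sbc_count} SBCs + switch ~ {total_watts} W")
# With these guesses it lands around 130 W: a bit over the 100 W figure,
# comfortably inside the 200 W hedge. A single NVMe drive can already move
# well over 1 GB/s, so the throughput side is the easy part.
```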