
Re: DATACENTER: Cooling and Cost Data for Web Hosting Data Centers

Oh boy. Some real shit on the list!! :) I got the beers at NANOG!!!

Anyhow, my numbers assume a 25% build, to be safe: I start from what I
know I've maxed a rack out at, then aggregate and mean-average over
growth.

I do think it's reasonable.
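For anyone who wants the arithmetic spelled out, here's a minimal sketch of one reading of that method. The per-rack amp figures and the exact growth margin are hypothetical, purely illustrative:

```python
# Hypothetical per-rack maximum draws (amps) from racks known to be maxed out.
rack_max_amps = [18.0, 24.0, 30.0, 22.0]

# Aggregate, then mean-average to get a per-rack planning number.
mean_amps = sum(rack_max_amps) / len(rack_max_amps)

# Pad for growth; the 25% here is an assumed margin echoing the
# "25% build" figure above, not a number from the original mail.
GROWTH_MARGIN = 1.25
planning_amps = mean_amps * GROWTH_MARGIN

print(mean_amps, planning_amps)
```

With those sample racks the mean is 23.5 A, and the padded planning figure is about 29.4 A per rack.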



> 
> On Wed, 8 Dec 1999, Dave Siegel wrote:
> 
> > > You take the AVERAGE power consumption of all the racks. But you
> > > knew that. Or, *ahem*, I hope you did.
> > 
> > It should be obvious based on my case study that I don't build
> > datacenters, only the network portion of them.  ;-)  I'm more an IP
> > geek than a facilities geek (and gosh, I hope to stay that way!)
> 
> Oh, my bad.
> Sorry.
> You're "one of them"
> heh heh heh.
> 
> It's all good, I've gotten to the point that you network people just
> *confuse* me.  Especially when you try to talk power and cooling and
> such.
> 
> > I was responding to the issue of A/rack.  I still don't think
> > 100A/rack is unreasonable.
> 
> As an average?  That's absurd.  See below, re: adding drops.
> 
> > At some point, your fuses for your 110VAC
> 
> (don't say fuses, they're breakers)
> 
> > whips come into play, and based on the ever increasing density of
> > CPUs/rack and increasing power requirements of telecom equipment (more
> > optical connections & lasers to power up/rack), going overkill is a
> > good idea.
> 
> Ah, OK.  Then we get into the whole discussion about RPPs and other power
> distribution.  What people (and this is not a cut on you....) don't
> realize is that a datacenter is not a static environment.  A properly
> designed facility will make additional power drops a 10-minute job.
> 
> -- 
> Ken Woods
> [email protected]
> "Used to be a geek.  Now I'm a facilities guy."
> "And yes, I'm *really* going to bed now"
>
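A rough sanity check on the 100 A/rack figure debated above, assuming the 110 VAC whips and 20 A breakers the thread mentions (the 80% figure is the usual NEC continuous-load derating; breaker size here is an assumption, not from the original mail):

```python
import math

VOLTS = 110
AMPS_PER_RACK = 100  # the contested average from the thread

watts = VOLTS * AMPS_PER_RACK  # 11,000 W of 110 VAC load in one rack

BREAKER_AMPS = 20         # assumed whip breaker size
CONTINUOUS_DERATE = 0.8   # breakers loaded to 80% for continuous draw (NEC rule)
usable_amps = BREAKER_AMPS * CONTINUOUS_DERATE  # 16 A usable per circuit

# Number of 20 A circuits needed to feed 100 A continuously.
circuits = math.ceil(AMPS_PER_RACK / usable_amps)

print(watts, circuits)
```

That works out to 11 kW and seven 20 A drops per rack, which is why 100 A as an *average* (rather than a worst-case build-out target) draws fire here.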