Dan Magenheimer <dan.magenheimer@xxxxxxxxxx> writes:
> > With the move to Xen, suddenly the heavy user was the only user
> > seeing the slowness. Now the heavy user has the option of paying
> > me more money for more ram to use as disk cache, or of dealing with it
> > being slow. Light users had no more trouble. Log in once every
> > 3 months? Your /etc/passwd is still cached from last time.
>
> Am I understanding this correctly that you are "renting" a
> fixed partition of physical RAM that (assuming the physical
> server never reboots) persistently holds one VPS customer's
> VM's memory forever, never saved to disk?
Yes. Exactly. If you rent a 1GiB VPS from me, the way I see it,
you are renting 1/32nd of one of my 32GiB servers (and paying a
premium for the privilege). Because the cost of giving you extra CPU
when nobody else wants it is nearly zero, I'll give you up to a full
core when it's free, so that's a small bonus.
> Although I can see this being advantageous for some users,
> no matter how cheap RAM is, having RAM sit "idle" for months
> (or even minutes) seems a dreadful waste of resources,
> which is either increasing the price of the service or the
> cost to the provider for a very small benefit for a
> small number of users. I see it as akin to every VM
> computing pi in a background process because, after all,
> the CPU has nothing better to do if it was going to be
> idle anyway.
Wait, what? The difference is that if you aren't using the CPU, I can
take it away and then give it back to you almost immediately when you
want it, at a small cost (flushing the CPU cache, which is a big deal
for scientific-type applications but fast enough that it doesn't hurt
the perceived responsiveness of the box, unless it happens many times
in a short period).
RAM is different. If I take away your pagecache, either I save it to
disk (slow) and restore it (slow) when I return it, or I take it from
you without saving it and hand back clean pages when you want it back,
meaning that if you want that data you've got to re-read it from disk
(slow).
By slow, I mean slow enough that you notice. You type a command and
sit, wondering what the problem with this cheap piece of crap you
rented from me is, while the disk seeks.
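To put a number on "slow enough that you notice", here is a minimal
sketch (assuming Linux, and a hypothetical pre-created test file at
/var/tmp/testfile) that times the same read warm from pagecache and
again cold, after asking the kernel to drop the cached pages:

    /* A minimal sketch, assuming Linux and a pre-created test file at
     * the hypothetical path /var/tmp/testfile: time a full read while
     * the file is warm in pagecache, drop the cached pages with
     * posix_fadvise(POSIX_FADV_DONTNEED), then time the cold re-read. */
    #define _XOPEN_SOURCE 600
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/time.h>
    #include <unistd.h>

    static double read_all_ms(const char *path)
    {
        char buf[1 << 16];
        struct timeval t0, t1;
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); exit(1); }
        gettimeofday(&t0, NULL);
        while (read(fd, buf, sizeof(buf)) > 0)
            ;                            /* just drag the file through */
        gettimeofday(&t1, NULL);
        close(fd);
        return (t1.tv_sec - t0.tv_sec) * 1e3 +
               (t1.tv_usec - t0.tv_usec) / 1e3;
    }

    int main(void)
    {
        const char *path = "/var/tmp/testfile"; /* hypothetical file */

        /* warm it up once, then measure the cached read */
        read_all_ms(path);
        printf("warm (pagecache): %8.1f ms\n", read_all_ms(path));

        /* drop the cached pages -- they are clean, we only read them */
        int fd = open(path, O_RDONLY);
        posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
        close(fd);

        printf("cold (from disk): %8.1f ms\n", read_all_ms(path));
        return 0;
    }

The warm run is pure memory bandwidth; the cold run is whatever the
disk can deliver, which is the gap this whole thread is about.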
Hitting disk pushes the performance of nearly anything well into
'unacceptable' territory, even on expensive 10K rpm disks, and
especially when you have a bunch of people hitting those same disks.
(I, and every competitor I know of within an order of magnitude of my
pricing, use 7500rpm SATA, which exacerbates the problem; but the
difference between 10K SAS and 7.5K SATA is nowhere near the orders
of magnitude that separate RAM and disk.)
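The arithmetic behind that last claim is easy to sketch; the seek
times and the DRAM figure below are ballpark assumptions of mine, not
measurements:

    /* Back-of-envelope latency sketch.  Average rotational latency is
     * half a revolution; the seek times and the ~100ns DRAM access
     * are assumed ballpark figures, not measurements. */
    #include <stdio.h>

    int main(void)
    {
        const double ram_ns    = 100.0;           /* assumed DRAM access */
        const double rpm[]     = { 7500.0, 10000.0 };
        const double seek_ms[] = {    8.0,     4.0 };  /* assumed seeks */

        for (int i = 0; i < 2; i++) {
            double rot_ms   = 0.5 * 60000.0 / rpm[i]; /* half a turn */
            double total_ms = rot_ms + seek_ms[i];
            printf("%6.0f rpm: ~%4.1f ms per random read, ~%.0fx RAM\n",
                   rpm[i], total_ms, total_ms * 1e6 / ram_ns);
        }
        return 0;
    }

That puts the two classes of disk within a factor of two of each
other, while both sit roughly five orders of magnitude behind RAM.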
This does not help 'a few users'; it massively increases the
perceived responsiveness of nearly all VPSs. What if you only get a
website hit every 10 minutes? Would you be satisfied if that hit took
north of a second to return because it had to go to disk every time?
I wouldn't.
Would you complain if there was often north of a 1500ms delay between
when you typed a command and when you got a response? I can tell you
that my customers did, back when I used a shared pagecache. (And yes,
that was on 10K Fibre Channel disks in RAID 1+0.)
Solving these problems is what pagecache is for.
> While I can see how the current sorry state of memory management
> by OS's and hypervisors might lead to this business decision,
> my goal is to make RAM a much more "renewable" resource.
> The same way CPU's are adding power management so that
> they can be shut down when idle even for extremely small
> periods of time to conserve resources, I'd like to see
> "idle memory" dramatically reduced. Self-ballooning and
> tmem are admittedly only a step in that direction, but
> at least it is (I hope) the right direction.
I keep saying: pagecache is not idle RAM. Pagecache is essential to
the perception of acceptable system performance. I've tried selling
service (on 10K Fibre Channel disks, no less) with a shared pagecache,
and by all reasonable standards the performance was unacceptable.
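For concreteness, the self-ballooning proposed above boils down to a
control loop along these lines; the Committed_AS heuristic, the sysfs
path, and the slack factor are my assumptions for illustration, not
necessarily what the actual driver does:

    /* A sketch of the self-ballooning feedback loop as I understand
     * the proposal: keep steering the guest's balloon target toward
     * its committed memory so "idle" pages (mostly pagecache) go back
     * to the hypervisor.  The sysfs path, the Committed_AS heuristic
     * and the 1/8th slack are illustrative assumptions only. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* read a field like "Committed_AS: 123456 kB" from /proc/meminfo */
    static long meminfo_kb(const char *key)
    {
        char line[256];
        long val = -1;
        FILE *f = fopen("/proc/meminfo", "r");
        if (!f)
            return -1;
        while (fgets(line, sizeof(line), f))
            if (!strncmp(line, key, strlen(key)) &&
                sscanf(line + strlen(key), ": %ld", &val) == 1)
                break;
        fclose(f);
        return val;
    }

    int main(void)
    {
        for (;;) {
            long committed = meminfo_kb("Committed_AS");
            if (committed > 0) {
                /* leave some slack so we don't balloon the guest into
                 * the OOM killer; 1/8th is an arbitrary choice */
                long target_kb = committed + committed / 8;
                FILE *f = fopen("/sys/devices/system/xen_memory/"
                                "xen_memory0/target_kb", "w");
                if (f) {
                    fprintf(f, "%ld\n", target_kb);
                    fclose(f);
                }
            }
            sleep(5);       /* re-aim the balloon every few seconds */
        }
    }

Note what that loop does to a lightly loaded guest: everything above
Committed_AS, i.e. exactly the pagecache argued for above, gets handed
back, which is the crux of the disagreement.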