linux gateway and router = redundant?

ME dugan at passwall.com
Sat Mar 16 00:34:34 PST 2002


On Fri, 15 Mar 2002, augie wrote:
> that is 'web proxy'. let's say i decide to run squid on the gateway to cache 
> webpages to speed up websurfing even more. now i wonder if i should be more 
> concerned with memory, hard disk storage, and processor speed?

I have not tried Squid on a low-end PC, but I would expect it to behave much
like a web server serving simple content for internal use (not so much
SSI/CGI/SHTML, but lots of TXT and HTML, for example):

Memory, disk speed, motherboard/NIC (listed in order of importance); CPU is
almost a non-issue.

Memory first, as buffers and cache can grow with more memory. Also, as
more users on your side of the network open more proxy "sessions",
memory demands may increase (close to linearly per user over time). Extra
memory can also make up, to some extent, for slow disks.
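A quick way to see how much of the box's RAM is sitting in buffers/cache
(a rough sketch, assuming a Linux system; the field names come straight from
/proc/meminfo):

```shell
# Pull the totals out of /proc/meminfo -- MemTotal, Buffers, and Cached
# are standard field names on Linux.
mem_total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
buffers_kb=$(awk '/^Buffers:/ {print $2}' /proc/meminfo)
cached_kb=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
echo "total: ${mem_total_kb} kB, buffers: ${buffers_kb} kB, cached: ${cached_kb} kB"
```

The larger the buffers/cache numbers relative to total, the more the kernel
is already soaking up spare RAM to avoid hitting the disk.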

Speed of disk second, as locating and sending cached files can become a
burden while other (presumably semi-busy) services compete for the disk.
(Again, more memory can diminish this impact.)
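Both the memory and the disk points map onto Squid tunables. A minimal
squid.conf sketch (cache_mem and cache_dir are real long-standing directives;
the values here are illustrative, not recommendations):

```
# RAM Squid may use for in-transit and "hot" objects.
cache_mem 32 MB

# On-disk cache: storage type, directory, size in MB,
# and the number of first/second level subdirectories.
cache_dir ufs /var/spool/squid 1000 16 256
```

More cache_mem shifts work from the disk to RAM, which is exactly the
memory-over-disk trade-off described above.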

Then comes examination of the motherboard/NIC. A faster motherboard clock
and NIC may allow for faster data flow and pushing. Even so, this is less
important than memory or disk speed. (SCSI disks are often faster than IDE,
for example.)

CPU usage would be expected to be remarkably low with only a few home
users of a caching proxy. As users and/or per-user demands increase, CPU
utilization will of course increase. However, I would expect enabling
site-wide Server Side Includes and server-parsed CGI/HTML on your web
server to be a *much* greater drain than the per-user CPU cost of a
caching proxy.

The slowest machine I ran Squid on was a PPro 200 with 128MB of RAM, along
with a web server and ssh. (It was a server-class machine with SCSI disks
and a high speed, for its time, motherboard and NIC.) Even under heavier use
with the proxy and web server we seldom broke a load average of 0.03, with
most time being at/below 0.01.
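For the curious, load averages like the ones above can be read straight out
of /proc/loadavg on a Linux box (a minimal sketch; the first three fields
are the 1, 5, and 15 minute averages, the same numbers uptime reports):

```shell
# /proc/loadavg looks like: "0.03 0.01 0.00 1/123 4567"
# The first three fields are the load averages; the rest we ignore here.
read load1 load5 load15 _ < /proc/loadavg
echo "load averages: $load1 (1m) $load5 (5m) $load15 (15m)"
```

Anything consistently near 0.0x on a proxy box means the CPU is mostly idle,
which matches the experience described above.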

I do not recall it being slow, or impacted, but this was a PPro with ample
memory and high speed disks, and it served mostly straight TXT/HTML content
(few dirs allowed for SSI, CGI, SHTML...).

Frank has done some pretty cool things, and he may have experience running
Squid closer to system limits. Since my statements above are educated
guesses based on fragmented, distant memories, if Frank offers something
more recent that contradicts me, he will likely have the more correct
answer.

-ME

-----BEGIN GEEK CODE BLOCK-----
Version: 3.12
GCS/CM$/IT$/LS$/S/O$ !d--(++) !s !a+++(-----) C++$(++++) U++++$(+$) P+$>+++ 
L+++$(++) E W+++$(+) N+ o K w+$>++>+++ O-@ M+$ V-$>- !PS !PE Y+ !PGP
t at -(++) 5+@ X@ R- tv- b++ DI+++ D+ G--@ e+>++>++++ h(++)>+ r*>? z?
------END GEEK CODE BLOCK------
decode: http://www.ebb.org/ungeek/ about: http://www.geekcode.com/geek.html
