This penguin walks on a bed of blue screens of death!

ME dugan at passwall.com
Fri Oct 5 23:03:44 PDT 2001


> On some day, ME wrote:
> >That is good for the local server system, but only with the test
> >server. Once you start adding clients and users using those clients
> >booting, and loading files/libs, things will slow down.

On Fri, 5 Oct 2001, Lincoln Peters wrote:
> Is that still true if the libraries for the clients are all on the sdb disk?

Yes and no. If you have a sufficient amount of memory on the server, then
the Linux kernel will (automatically) cache read-only files in memory and
decrease disk access. Also, you can add memory to nicer SCSI cards for
caching and possibly gain a little speedup for frequently read files on
read-only filesystems. However, though SCSI card memory caching is faster
than disk access, its access times are not as fast as kernel/memory
caching.
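
(Quick way to see the kernel cache at work, if you have python handy.
A minimal sketch; the path is just an example, point it at any big file
on the server:)

<pre>
import time

# Minimal sketch: time two reads of the same file. The first read
# likely hits the disk; the second is likely served from the
# kernel's page cache, so it should come back much faster.
PATH = "/usr/lib/libc.a"  # example path, use any large file

def timed_read(path):
    start = time.time()
    with open(path, "rb") as f:
        while f.read(1024 * 1024):  # read in 1MB chunks
            pass
    return time.time() - start

print("first read : %.4f s (disk)" % timed_read(PATH))
print("second read: %.4f s (page cache)" % timed_read(PATH))
</pre>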

If you have tons of memory, then mounting ramdisks with gigabytes of space
and copying files to them from the hard disk can work too.

With only 16MB of RAM, I doubt you have sufficient space in memory for
all of the services plus lots of disk caching for libs and such on the
server side. (Clients may still be able to cache libs, but each client
will need to read the lib at least once after boot, and the smaller the
memory on the clients, the more likely they will need to re-read the same
files from the NFS share. Client caching will decrease the biggest
slow-down (the network), so lots of client memory is good for this.)
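
Rough numbers, all assumed, just to show the scale of the problem:

<pre>
# Back-of-the-envelope sketch with assumed, illustrative sizes
# (not measurements) for a 16MB server feeding diskless clients.
MB = 1024 * 1024
ram = 16 * MB
assumed = {
    "kernel + network buffers": 4 * MB,  # assumption
    "nfsd, portmap, etc.":      2 * MB,  # assumption
    "other daemons + shells":   4 * MB,  # assumption
}
left = ram - sum(assumed.values())
print("left for page cache: %.1f MB" % (left / float(MB)))
# ~6MB of cache will not hold many client libs, so most client
# reads still end up going to the disk.
</pre>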

> Do you (or anyone else) know anything about bonding Ethernet devices?  If I 
> could get the other two NIC cards to do that, I'd re-install them.  If the 
> network supports >10Mbps (I honestly don't know, but it should), 30Mbps will 
> definitely be better than 10Mbps.  I would try to bump it up to 4 NIC's, but 
> there aren't enough expansion slots for a fourth (unless I remove the video 
> card).

Um, channel bonding with ethernet may offer only a small performance
increase if the multiple interfaces all use the same shared ethernet
segment/collision domain (assuming a HUB first). (All you really gain is
an increased chance of access to the media, as you have 2 or more cards
contending for chances to write to the shared media. On a busy network,
this may hamper speed, as both cards may invoke CSMA/CD and do that whole
back-off-for-a-random-time dance more often after collisions before
attempting to re-send (both will likely have something to send to the
shared media at the same time). Also, the overhead of the kernel's
multiplexing and demultiplexing must be included, even if it is fast and
not "true multiplexing/demultiplexing" like "eql".)

Channel bonding is likely most effective when multiple NICs each use their
own private ethernet segment to another server, less effective on separate
private VLANs on a switched network, and even less effective on a simple
switched network, but probably still *way* more effective there than on a
simple hub-based/repeater-only network.

<pre>
ASCII Art: Here is how I believe channel bonding was
originally supposed to work:

Different private links for each host-to-host network:

       |=NIC1a----NIC1b=|
       |                |
host1--|=NIC2a----NIC2b=|--host2
       |                |
       |=NIC3a----NIC3b=|
</pre>

The diagram above describes how I see channel bonding working optimally
for ethernet from one host to another, or from one server to another.

<pre>
ASCII Art: This is what some people see for channel bonding:
            ---HUB---NIC0b=|--host5
               HUB
       |=NIC1a-HUB---NIC1b=|--host2
       |       HUB
srvr1--|=NIC2a-HUB---NIC2b=|--host3
       |       HUB
       |=NIC3a-HUB---NIC3b=|--host4
               HUB
            ---HUB---NIC4b=|--host6
</pre>

If the channel bonding uses cloned MAC addresses, and a HUB is used, I see
losses in performance with multiple NICs competing for network access at
the same time, and a higher risk of collisions. You also have processor
overhead for recombination/sorting. The only speedup I see is in network
utilization. If one card is getting (burst max) 60% utilization of your
network, and you add another card to the same collision domain, then you
may be able to increase network utilization to, say (guess), (burst max)
80% between both cards, but at the cost of increased collisions and
performance degradation for the other hosts on the shared segment.
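
Here is a toy model of that trade-off (slotted contention with made-up
numbers, not real CSMA/CD): a slot only carries a frame when exactly one
station transmits in it.

<pre>
# Toy slotted-contention model (an assumption, not real CSMA/CD):
# each of n stations transmits in a slot with probability p. A slot
# is useful only if exactly one station sends; two or more collide.
def slot_stats(n_total, n_server, p):
    p_idle = (1 - p) ** n_total
    p_useful = n_total * p * (1 - p) ** (n_total - 1)
    p_server = n_server * p * (1 - p) ** (n_total - 1)  # server's cut
    return p_server, p_useful, 1 - p_idle - p_useful

for cards in (1, 2):
    n = cards + 3  # server cards plus 3 background hosts (assumed)
    server, useful, collided = slot_stats(n, cards, 0.2)
    print("%d card(s): server %4.1f%%, total useful %4.1f%%, "
          "collisions %4.1f%%"
          % (cards, 100 * server, 100 * useful, 100 * collided))
</pre>

With these made-up numbers the server's slice grows, the total useful
fraction barely moves, and the collision fraction climbs; that cost
lands on the other hosts on the shared segment.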

Change the above "HUB" to an "ETHERSWITCH" that keeps a MAC
address/port/datestamp entry for each port, and a shared MAC may allow a
greater performance increase than the HUB, but this may cause headaches on
the side of the switch, as you have multiple ports with the same cloned
MAC! The normal resolution on switches is to just update the internal
MAC/port db to reflect the new change in what MAC belongs to what port,
and then send ethernet frames with a DST of that MAC to the new port
instead of the old port that was in use before. But what does this do? The
last port to send is more likely to still be sending when an incoming
ethernet frame for the "server" is meant to be forwarded to the server.
This frame destined for the server can't make it to the server while the
sending port from the server is active and sending to the switch (even
though the other "free" bonded ports might be available).
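
A sketch of the flapping I mean, with the switch's MAC/port/datestamp db
boiled down to a dictionary (the MAC below is made up):

<pre>
import time

# Sketch of a learning switch's MAC/port/datestamp db. Every frame
# a port sends updates the entry for its source MAC, so a MAC
# cloned across bonded ports "flaps" to whichever port sent last,
# and all inbound frames then chase that one port.
mac_table = {}  # MAC -> (port, timestamp)

def learn(src_mac, port):
    mac_table[src_mac] = (port, time.time())

def forward_port(dst_mac):
    entry = mac_table.get(dst_mac)
    return entry[0] if entry else "flood all ports"

SERVER_MAC = "00:a0:cc:00:00:01"  # hypothetical cloned MAC

learn(SERVER_MAC, "port1")       # NIC1a sends a frame
print(forward_port(SERVER_MAC))  # -> port1
learn(SERVER_MAC, "port3")       # NIC3a sends the next frame
print(forward_port(SERVER_MAC))  # -> port3, even if port3 is busy
</pre>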

If the switch creates virtual paths between ports for "connections" it
sees in layer 3/4 (now the layer 3/4 etherswitch knows IP), and can manage
to somehow remember that one session (ooh, another layer!) is assigned to
one port, then you could have a higher layer of load balancing at the cost
of packet inspection (delays/latency). This could be made to work if you
had enough control to code your own layer 3/4 switch, but they are
expensive, and what vendor will let you code their firmware and give you
access to their specs? (There may be a complete package out there to do
this and alleviate the need to code your own changes, but layer 3/4
switches are more expensive.)
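
The session pinning could be as simple as hashing the layer 3/4 flow info
to pick a port. A toy sketch (my own made-up policy, not any vendor's
algorithm):

<pre>
# Toy sketch of layer 3/4 flow pinning: hash a session's addresses
# and ports so every frame of that session uses the same bonded
# port, while different sessions spread across the ports.
BONDED_PORTS = ["port1", "port2", "port3"]

def pick_port(src_ip, src_port, dst_ip, dst_port, proto="tcp"):
    flow = (src_ip, src_port, dst_ip, dst_port, proto)
    return BONDED_PORTS[hash(flow) % len(BONDED_PORTS)]

print(pick_port("10.0.0.5", 1023, "10.0.0.1", 2049))  # client A's NFS
print(pick_port("10.0.0.6", 1022, "10.0.0.1", 2049))  # client B's NFS
</pre>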

Use of the etherswitch options "store and forward" vs "cut-through" may
offer some help for a host that is sending on one interface while frames
coming into the switch need to go to the server, but each of those has its
own impact on the switch too. :-/
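
The numbers behind that trade-off, with simple serialization math (10Mbps
assumed): store-and-forward buffers the whole frame before sending it on,
while cut-through starts forwarding once it has read the destination
address.

<pre>
# Per-frame forwarding delay added by the switch at 10Mbps.
# Store-and-forward waits for the whole frame (and can check the
# CRC); cut-through forwards after roughly the 14-byte header
# (6 dst + 6 src + 2 type), giving up the error checking.
RATE_BPS = 10 * 10**6

def serialize_us(nbytes):
    return nbytes * 8.0 / RATE_BPS * 1e6

for size in (64, 1518):  # min and max ethernet frame sizes
    print("%4d-byte frame: store-and-forward %7.1f us,"
          " cut-through %5.1f us"
          % (size, serialize_us(size), serialize_us(14)))
</pre>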

Using a switch in a setup with VLANs, where you have one NIC per VLAN, and
then using the top diagram for two servers could possibly work, if the
switch could cope with the channel bonding.

If you want the server to have all of its NICs share the same network
segments as all of the hosts, then I see an etherswitch as being more
capable of making this "work" than a HUB, but I would then ask, "Can all
etherswitches deal with cloned MACs on different ports equally?" and
"Are there switches that allow you to 'break the rules' and assign more
than one MAC per port, and send a frame with a DST of that MAC out the
port that is least busy?"

This message includes a lot of "thinking online". Anyone/everyone, feel
free to add your comments, as I have not spent much time thinking about
this. I know we have some networking people here, and some of them might
have more insight into this. After all, I am still learning.

-ME

-----BEGIN GEEK CODE BLOCK-----
Version: 3.12
GCS/CM$/IT$/LS$/S/O$ !d--(++) !s !a+++(-----) C++$(++++) U++++$(+$) P+$>+++ 
L+++$(++) E W+++$(+) N+ o K w+$>++>+++ O-@ M+$ V-$>- !PS !PE Y+ !PGP
t at -(++) 5+@ X@ R- tv- b++ DI+++ D+ G--@ e+>++>++++ h(++)>+ r*>? z?
------END GEEK CODE BLOCK------
decode: http://www.ebb.org/ungeek/ about: http://www.geekcode.com/geek.html
     Systems Department Operating Systems Analyst for the SSU Library


