[NBLUG/talk] top output help
Walter Hansen
gandalf at sonic.net
Wed Apr 25 11:00:18 PDT 2007
The application establishes a 3270 connection over a raw port; well,
actually I'm using Net::Telnet. Here are my use statements:
use strict;
use Net::Telnet ();             # raw connection to the 3270 host
use Convert::EBCDIC;            # ASCII <-> EBCDIC translation
use POSIX qw(setsid);           # detach from the terminal (daemonize)
use IPC::Shareable;             # shared memory between worker processes
use Time::HiRes qw(usleep gettimeofday tv_interval);   # sub-second timing
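
For anyone curious, the connection setup looks roughly like this (the host,
port and request string here are made up, not the real ones):

use strict;
use Net::Telnet ();
use Convert::EBCDIC;

# Hypothetical host/port -- the real values come from our config.
my $host = 'host.example.com';
my $port = 9923;

# Telnetmode => 0 skips telnet option negotiation, so this is basically a
# raw TCP stream; Binmode => 1 stops Net::Telnet from touching CR/LF.
my $conn = Net::Telnet->new(
    Host       => $host,
    Port       => $port,
    Binmode    => 1,
    Telnetmode => 0,
    Timeout    => 15,
);   # the default errmode dies if the connect or a read times out

my $xlate = Convert::EBCDIC->new;

# Requests go out as EBCDIC; replies get translated back to ASCII.
$conn->put($xlate->toebcdic("SOME REQUEST"));
my $raw   = $conn->get;            # one raw read from the socket
my $reply = $xlate->toascii($raw);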
The program creates a session with a distant server and makes requests as
fast as possible. During critical times the responses from the distant
server take ten or more seconds, so we have 295 connections to this server
making requests at the same time. The load on the processor is tiny, since
at any given time only a few of the processes are actually doing anything.
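
The loop in each process is basically "send, wait, time it, repeat";
something along these lines (the stub subs below just stand in for the real
send/receive code):

use strict;
use warnings;
use Time::HiRes qw(usleep gettimeofday tv_interval);

# Stand-ins for the real Net::Telnet send/receive calls, just so this runs.
sub send_request  { 1 }
sub read_response { usleep(200_000); "OK" }   # pretend the server takes 200 ms

for my $i (1 .. 5) {
    my $t0 = [gettimeofday];

    send_request();
    my $reply = read_response();

    my $elapsed = tv_interval($t0);            # fractional seconds
    printf "request %d took %.3fs\n", $i, $elapsed;
    warn "slow response: ${elapsed}s\n" if $elapsed > 10;

    usleep(50_000);    # short pause so a tight loop doesn't spin the CPU
}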
I don't know where I'd start with a monolithic application. It would have
to keep track of 295 Telnet connections (or more) in a very exacting
manner. I had originally planned a control program that would manage slave
programs, but I've found that the slaves alone work well so long as they
communicate with each other (I'm using IPC::Shareable).
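
The sharing itself is pretty simple; roughly like this (the "stat" glue key
and the status strings are just for illustration, not what the real code
uses):

use strict;
use warnings;
use IPC::Shareable;

# Every worker that ties with the same glue key sees the same hash.
tie my %status, 'IPC::Shareable', 'stat', { create => 1, mode => 0666 };

my $id = $$;    # use the PID as this worker's slot in the hash

# Lock around writes so concurrent workers don't step on each other.
(tied %status)->shlock;
$status{$id} = 'waiting since ' . time;
(tied %status)->shunlock;

# Any other worker (or a monitor script tied to the same glue) can read it:
for my $pid (sort { $a <=> $b } keys %status) {
    print "$pid: $status{$pid}\n";
}

With create => 1 the first worker to start creates the segment and the rest
just attach to it.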
I just know there's a point at which the processes themselves will start
getting swapped out, and performance will go from a non-issue to the issue
very quickly. I could probably work on making the system run leaner too,
even if it's only during critical times. I'm including a top from a
non-critical time below.
If I shut down sendmail and then restart it, will mail that was sent while
it was shut down still go out?
I wonder where that Perl compiler project is? Hmm, it doesn't look like it
would help.
top - 12:47:39 up 11 days, 16:27, 2 users, load average: 0.00, 0.00, 0.00
Tasks: 51 total, 2 running, 49 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.3% us, 0.0% sy, 0.0% ni, 99.7% id, 0.0% wa, 0.0% hi, 0.0% si
Mem: 969372k total, 73048k used, 896324k free, 23884k buffers
Swap: 522072k total, 15396k used, 506676k free, 21932k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 16 0 1580 452 424 S 0.0 0.0 0:01.59 init
2 root RT 0 0 0 0 S 0.0 0.0 0:00.00 migration/0
3 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0
4 root 10 -5 0 0 0 S 0.0 0.0 0:00.10 events/0
5 root 20 -5 0 0 0 S 0.0 0.0 0:00.02 khelper
6 root 11 -5 0 0 0 S 0.0 0.0 0:00.00 kthread
8 root 20 -5 0 0 0 S 0.0 0.0 0:00.00 kacpid
94 root 10 -5 0 0 0 S 0.0 0.0 0:01.51 kblockd/0
129 root 15 0 0 0 0 S 0.0 0.0 0:12.26 pdflush
130 root 15 0 0 0 0 S 0.0 0.0 0:06.58 pdflush
132 root 17 -5 0 0 0 S 0.0 0.0 0:00.00 aio/0
131 root 15 0 0 0 0 S 0.0 0.0 0:15.94 kswapd0
720 root 15 0 0 0 0 S 0.0 0.0 0:00.00 kseriod
796 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 ata/0
800 root 18 0 0 0 0 S 0.0 0.0 0:00.00 scsi_eh_0
801 root 18 0 0 0 0 S 0.0 0.0 0:00.00 scsi_eh_1
823 root 15 0 0 0 0 S 0.0 0.0 0:02.53 md0_raid1
824 root 16 0 0 0 0 S 0.0 0.0 0:31.85 kjournald
966 root 21 -4 1580 360 356 S 0.0 0.0 0:00.11 udevd
1124 root 15 0 0 0 0 S 0.0 0.0 0:00.00 khubd
2057 root 16 0 1624 528 464 S 0.0 0.1 0:08.48 syslogd
2065 root 16 0 2384 412 408 S 0.0 0.0 0:00.05 klogd
2079 messageb 15 0 2240 464 460 S 0.0 0.0 0:00.01 dbus-daemon-1
2094 root 16 0 4060 848 620 S 0.0 0.1 0:00.95 hald
2109 root 16 0 2820 544 476 S 0.0 0.1 0:04.06 nifd
2196 nobody 16 0 11436 540 508 S 0.0 0.1 0:00.00 mDNSResponder
2214 daemon 16 0 1700 472 440 S 0.0 0.0 0:00.00 atd
2311 root 20 0 2252 652 648 S 0.0 0.1 0:00.00 _plutorun
2312 root 20 0 2252 652 648 S 0.0 0.1 0:00.00 _plutorun
2313 root 16 0 6008 1860 964 S 0.0 0.2 0:32.12 pluto
2314 root 17 0 2212 652 648 S 0.0 0.1 0:00.00 _plutoload
2315 root 16 0 1560 412 408 S 0.0 0.0 0:00.00 logger
2356 root 26 10 2536 624 556 S 0.0 0.1 0:03.98 pluto
2381 root 19 0 1500 196 192 S 0.0 0.0 0:00.00 _pluto_adns
2580 root 16 0 7208 956 740 S 0.0 0.1 0:03.56 sendmail.sendma
2595 mail 16 0 5952 752 628 S 0.0 0.1 0:00.11 sendmail.sendma
2616 root 16 0 1612 496 448 S 0.0 0.1 0:00.16 crond
2637 root 16 0 8336 2016 1608 S 0.0 0.2 0:00.38 miniserv.pl
2652 root 18 0 1560 364 360 S 0.0 0.0 0:00.00 mingetty
2655 root 18 0 1564 364 360 S 0.0 0.0 0:00.00 mingetty
2656 root 18 0 1564 364 360 S 0.0 0.0 0:00.00 mingetty
2657 root 18 0 1564 364 360 S 0.0 0.0 0:00.00 mingetty
2658 root 18 0 1560 364 360 S 0.0 0.0 0:00.00 mingetty
2659 root 18 0 1564 364 360 S 0.0 0.0 0:00.00 mingetty
21882 root 16 0 4320 700 596 S 0.0 0.1 0:00.22 sshd
27448 root 16 0 6984 2040 1684 S 0.0 0.2 0:00.05 sshd
27450 gandalf 16 0 7064 2008 1648 R 0.0 0.2 0:00.02 sshd
27451 gandalf 15 0 3156 1828 1216 S 0.0 0.2 0:00.03 bash
27492 root 17 0 2560 1192 968 S 0.0 0.1 0:00.00 su
27493 root 16 0 2640 1560 1212 S 0.0 0.2 0:00.03 bash
27545 root 16 0 2020 1036 816 R 0.0 0.1 0:00.01 top
--
This would be real cute NSFW tagline, but I'm married and would get killed.
>
> On Tue, April 24, 2007 23:10, Walter Hansen wrote:
>
>> The server runs multiple instances of a Perl application that I wrote. I
>> did manage to cut its memory footprint by almost 10% today, but the
>> bosses just added 100 instances. I cranked it up to see what it would do,
>> and things didn't creep, but stuff didn't seem all that happy either. At
>> the time of this top it was running 295 instances. I think each instance
>> sucks up 5400 (is it k or bytes in top?) of footprint (RES); I got that
>> down from 5900. It's run on a 1U server at Sonic and I've been told that
>> we can't upgrade the memory.
>>
>> I've been thinking about going through the code and optimizing it for
>> memory usage. This would probably be useful, but I don't think I could
>> squeeze it down more than another 10 or 20%.
>
> Could it be re-written as a monolithic, iterating app, rather than as
> separate instances? Even if the app were 10x the size, if the per-
> iteration cost were small enough it'd be worth it, given that "the
> bosses" are prone to "just add 100 instances."
>
> You'd also avoid a fair bit of context-switching overhead &c.
>
> Run it at an elevated priority, and it'd still get lots of cycles...
>
> All "in theory" of course, and "maybe," since I don't know any details
> of the applet or its function...
>
>
> - Steve S.