Recovered from the older tannerjc.net wiki snapshot dated January 23, 2016.

Observations

  • CPU speed has the most effect on queuing/processing throughput.
  • Printing a large number of jobs uses very little system RAM.
  • The daemon appears to process/complete jobs concurrently as they are continually spooled.
  • Every job writes two files (a control file and a data file) to /var/spool/cups, which causes intensive I/O for a large number of jobs.
  • Setting PreserveJobFiles No and PreserveJobHistory No in /etc/cups/cupsd.conf causes the control and data files to be deleted once a job completes; this is good practice to keep the directory from filling up and breaking the queue.
  • Put /var/spool/cups on a fast dedicated block device (RAID, SSD, ramdisk).
  • If the spool directory is full, CUPS will refuse to queue the job.
  • Once the spool has enough free space to accept the incoming job, CUPS will queue it and begin incrementing the job count again.
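The benchmark results later on this page bear the last points out: a tmpfs spool processed jobs far faster than disk. A hypothetical /etc/fstab entry for that setup (the 564m size mirrors the tmpfs used in the later runs; match mode/ownership to your existing spool directory, and note the queue is lost on reboot):

```
tmpfs  /var/spool/cups  tmpfs  size=564m  0 0
```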

Notes

  • Checking printer status
[root@localhost ~]# lpc status dummy  | fgrep -i -e queuing -e printing
	queuing is enabled
	printing is enabled
  • Enabling or disabling a printer: cupsenable dummy / cupsdisable dummy

  • Completed Job messages

I [20/Aug/2010:07:44:49 -0400] [Job 121862] Completed successfully.
I [20/Aug/2010:07:44:49 -0400] [Job 121863] Completed successfully.
I [20/Aug/2010:07:44:50 -0400] [Job 121864] Completed successfully.
I [20/Aug/2010:07:44:50 -0400] [Job 121865] Completed successfully.
I [20/Aug/2010:07:44:50 -0400] [Job 121866] Completed successfully.
I [20/Aug/2010:07:44:50 -0400] [Job 121867] Completed successfully.
  • No free space in /var/spool/cups
[root@localhost ramdisk]# lp data1.txt
lp: Request Entity Too Large
[root@localhost ~]# tail  /var/log/cups/error_log
E [20/Aug/2010:05:19:47 -0400] cupsdReadClient: Unable to write 32768 bytes to /var/spool/cups/00000149: Success
E [20/Aug/2010:05:19:47 -0400] cupsdReadClient: Unable to write 16000 bytes to /var/spool/cups/0000014a: Success
E [20/Aug/2010:05:19:47 -0400] cupsdReadClient: Unable to write 16000 bytes to /var/spool/cups/0000014b: Success
E [20/Aug/2010:05:19:47 -0400] cupsdReadClient: Unable to write 16000 bytes to /var/spool/cups/0000014c: Success
E [20/Aug/2010:05:19:47 -0400] cupsdReadClient: Unable to write 16000 bytes to /var/spool/cups/0000014d: Success

Initial setup

[root@localhost ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.5 (Tikanga)
[root@localhost ~]# rpm -q cups
cups-1.3.7-18.el5
  • Start collecting stats (the stock sysstat cron entry was changed to run sa1 every minute; the "every 10 minutes" comment below is stale)
[root@localhost ~]# yum -y install sysstat

[root@localhost ~]# cat /etc/cron.d/sysstat
# run system activity accounting tool every 10 minutes
*/1 * * * * root /usr/lib64/sa/sa1 1 1
# generate a daily summary of process accounting at 23:53
53 23 * * * root /usr/lib64/sa/sa2 -A

[root@localhost ~]# cat /usr/lib64/sa/sa1 | egrep -v -e ^\$ -e ^\#
umask 0022
ENDIR=/usr/lib64/sa
cd ${ENDIR}
if [ $# = 0 ]
then
	exec ${ENDIR}/sadc -d -F -L 1 1 -
else
	exec ${ENDIR}/sadc -d -F -L $* -
fi
  • Create dummy text printer
[root@localhost ~]# cat /etc/cups/printers.conf
# Printer configuration file for CUPS v1.3.7
# Written by cupsd on 2010-08-19 12:16
Printer dummy
Info dummy
Location dummy
DeviceURI file:/dev/null
State Idle
StateTime 1282234589
Accepting Yes
Shared Yes
JobSheets none none
QuotaPeriod 0
PageLimit 0
KLimit 0
OpPolicy default
ErrorPolicy retry-job
/Printer
  • Test a simple print job
[root@localhost ~]# export PRINTER=dummy
[root@localhost ~]# lp install.log

[root@localhost ~]# head /var/spool/cups/d00001-001
Installing libgcc-4.1.2-48.el5.x86_64
warning: libgcc-4.1.2-48.el5: Header V3 DSA signature: NOKEY, key ID 37017186
Installing setup-2.5.58-7.el5.noarch
Installing filesystem-2.4.0-3.el5.x86_64
Installing basesystem-8.0-5.1.1.noarch
Installing cracklib-dicts-2.8.9-3.3.x86_64
Installing nash-5.1.19.6-61.x86_64
Installing 1:termcap-5.5-1.20060701.1.noarch
Installing tzdata-2010e-1.el5.noarch
Installing glibc-common-2.5-49.x86_64

[root@localhost ~]# strings /var/spool/cups/c00001  | tail -n 15
-urn:uuid:ac88dd4d-e71c-37c9-5320-a4cfc06a683dB
job-originating-host-name
	localhost!
time-at-creation
Llj*
time-at-processing
time-at-completed
job-id
	job-state
job-media-sheets-completed
job-printer-uri
.ipp://localhost.localdomain:631/printers/dummyB
job-name
install.log!
job-k-octets
  • Prepare to write code
[root@localhost ~]# mkdir code
[root@localhost ~]# cd code
[root@localhost code]# svn checkout --username jtanner http://svn.tannerjc.net/perl-examples
[root@localhost perl-examples]# yum -y install gcc
[root@localhost perl-examples]# perl -MCPAN -e shell
get String::Random
install String::Random
get Net::Printer
install Net::Printer
get Time::HiRes
install Time::HiRes
lp: Too many active jobs.
lp: Too many active jobs.
lp: Too many active jobs.
lp: Too many active jobs.
  • Need to set the MaxJobs directive in cupsd.conf
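The exact directive change isn't captured here; a minimal cupsd.conf sketch, assuming the stock limit of 500 active jobs is what produced the "Too many active jobs" errors above (0 disables the limit):

```
# /etc/cups/cupsd.conf
MaxJobs 0    # 0 = unlimited; the default cap is 500 jobs
```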
lp: Request Entity Too Large
lp: Request Entity Too Large
  • Something was wrong with the spool directory (I had been running rm -rf /var/spool/cups)

  • Restarting the daemon resolved the problem

  • Example run

[root@localhost perl-examples]# perl cups-bench.pl
request id is dummy-2731 (1 file(s))
request id is dummy-2732 (1 file(s))
request id is dummy-2733 (1 file(s))
...
request id is dummy-19752 (1 file(s))
request id is dummy-19753 (1 file(s))
request id is dummy-19754 (1 file(s))
request id is dummy-19755 (1 file(s))
request id is dummy-19756 (1 file(s))
request id is dummy-19757 (1 file(s))
request id is dummy-19758 (1 file(s))
request id is dummy-19759 (1 file(s))
file size: 1011712b
file create: 1s
file copy:   0s
time for 15000: 952s
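cups-bench.pl itself isn't preserved in this snapshot; the "file size / file create" lines suggest it first generates a ~1 MB printable text file before spooling copies of it. A rough shell equivalent of that generation step (the 1011712-byte size is taken from the output above):

```shell
# Sketch of the benchmark's "file create" phase: build a ~1MB printable text file.
out=$(mktemp)
LC_ALL=C tr -dc '[:print:]' < /dev/urandom | head -c 1011712 > "$out"
wc -c "$out"    # first field should match the 1011712b reported above
rm -f "$out"
```

Each queued copy of a file this size is what drives the spool-directory I/O measured below.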
  • top
top - 09:51:41 up 16:21,  5 users,  load average: 2.68, 1.36, 0.76
Tasks: 119 total,   2 running, 117 sleeping,   0 stopped,   0 zombie
Cpu(s): 34.9%us, 48.9%sy,  0.0%ni,  0.0%id, 13.0%wa,  0.0%hi,  3.2%si,  0.0%st
Mem:   1153596k total,  1145772k used,     7824k free,     8692k buffers
Swap:  2326520k total,       72k used,  2326448k free,   911180k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 2507 root      15   0  267m  10m 6100 S 14.0  0.9   0:34.15 eggcups
16548 root      15   0  138m  11m 2068 S  7.0  1.0   0:06.38 cupsd
16532 root      25   0 97024 6728 1760 S  3.0  0.6   0:03.82 perl
 1859 dbus      15   0 21532 1180  724 S  1.4  0.1   0:06.67 dbus-daemon
 8900 root      15   0 90108 3348 2616 S  1.4  0.3   0:05.89 sshd
  324 root      10  -5     0    0    0 S  1.2  0.0   0:27.61 kjournald
 2365 root      15   0  100m  12m 6536 S  0.4  1.1   0:13.04 Xorg
15499 root      15   0     0    0    0 S  0.4  0.0   0:00.45 pdflush
 1976 root      18   0 10228  792  692 S  0.2  0.1   0:45.45 hald-addon-stor
 2362 root      18   0  185m 4120 3392 S  0.2  0.4   0:02.05 gdm-rh-security
 2498 root      15   0  356m  18m  11m S  0.2  1.7   0:05.92 nautilus
    1 root      15   0 10348  692  580 S  0.0  0.1   0:00.63 init
  • iostat
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          34.34    0.00   52.53   13.13    0.00    0.00

Device:         rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00 24325.25  0.00 138.38     0.00 108339.39   782.89    69.44  171.74   4.59  63.54
sda1              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda2              0.00 24325.25  0.00 138.38     0.00 108339.39   782.89    69.44  171.74   4.59  63.54
dm-0              0.00     0.00  0.00 24556.57     0.00 196452.53     8.00  8072.20  108.44   0.03  63.64
dm-1              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
hda               0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
  • vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0     72  11272   8684 898584    0    0   202   357  285  136  1  1 96  1  0
 1  0     72  10820   8692 898496    0    0     2 17653 1042 4070 43 52  0  5  0
 1  0     72   7904   8824 899992    0    0    58 10933 1038 4132 44 52  0  4  0
 2  1     72   8096   8836 900524    0    0     5 20454 1034 3900 41 49  0 10  0
 2  2     72   8720   8948 885980    0    0     4 17076 1029 3651 56 44  0  0  0
 2  2     72  10972   8964 884328    0    0     2 15692 1028 3456 54 41  0  5  0
 0  1     72  12784   8948 882688    0    0     0 16674 1031 3560 56 42  0  1  0
 1  1     72   9932   8940 880176    0    0     2 15288 1032 3287 50 39  0 11  0
 1  1     72   8100   8972 883240    0    0     1 14493 1026 3327 53 40  0  7  0
 1  1     72  10464   8972 880276    0    0    14 16070 1029 3433 55 42  0  3  0
 1  1     72   9552   8960 883000    0    0     5  8726 1129 1788 29 28  0 44  0
 1  1     72   8956   8956 880716    0    0     2 11648 1031 2597 50 46  0  4  0
  • killall eggcups
  • rpm -e desktop-printing (eggcups, part of desktop-printing, was the top CPU consumer in the top capture above)

/dev/null printer

[root@localhost ~]# cat /etc/cups/printers.conf
# Printer configuration file for CUPS v1.3.7
# Written by cupsd on 2010-08-18 23:29
Printer dummy
Info dummy
Location dummy
#DeviceURI lpd://localhost/dummy
DeviceURI file:/dev/null
State Stopped
StateMessage /usr/lib/cups/backend/lpd failed
StateTime 1282188598
Accepting Yes
Shared Yes
JobSheets none none
QuotaPeriod 0
PageLimit 0
KLimit 0
OpPolicy default
ErrorPolicy retry-job
/Printer
[root@localhost perl-examples]# lpc status dummy
dummy:
	printer is on device '/dev/null' speed -1
	queuing is enabled
	printing is disabled
	4037 entries
	daemon present
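With thousands of entries queued, it helps to track queue depth over time. A small hypothetical helper that pulls the entry count out of lpc status (the sample input is the capture above):

```shell
# extract the pending-entry count from `lpc status <printer>` output
queue_depth() {
  awk '/entries$/ {print $1}'
}

# sample taken from the capture above; live use: lpc status dummy | queue_depth
printf '\t4037 entries\n\tdaemon present\n' | queue_depth
```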
  • iostat
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          19.82    0.10   46.55   33.53    0.00    0.00

Device:         rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00  4801.50  0.10 46.85     0.80 39369.77   838.62    60.27 1322.19  16.91  79.40
sda1              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda2              0.00  4801.50  0.10 46.85     0.80 39369.77   838.62    60.27 1322.19  16.91  79.40
dm-0              0.00     0.00  0.10 4847.65     0.80 38781.18     8.00  6865.47 1454.68   0.16  79.38
dm-1              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
hda               0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
  • vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  1     72   8448  16744 907308    0    0   191   728  361  243  2  3 93  2  0
 3  1     72   8060  16952 908260    0    0     0 22698 1087 2319 11 30  0 60  0
 3  0     72   8096  17536 905444    0    0     0 12062 1056 6336 30 67  0  4  0
 3  1     72   9876  17956 905004    0    0     1 27556 1042 4583 23 54  0 23  0
 0  1     72   9324  18324 905000    0    0    14 21512 1038 4415 21 50  0 29  0
 2  0     72   7940  18264 907176    0    0     0 20510 1029 3630 17 43  0 40  0
 1  1     72   8696  18148 905116    0    0     1 16945 1049 5565 30 59  0 11  0
  • top
top - 11:45:06 up 18:14,  5 users,  load average: 3.28, 2.32, 1.80
Tasks: 122 total,   3 running, 119 sleeping,   0 stopped,   0 zombie
Cpu(s): 12.4%us, 27.1%sy,  0.0%ni,  0.0%id, 57.3%wa,  0.2%hi,  3.0%si,  0.0%st
Mem:   1153596k total,  1145388k used,     8208k free,    17492k buffers
Swap:  2326520k total,       72k used,  2326448k free,   902192k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 3721 root      15   0  135m 8612 2064 S  3.8  0.7   0:07.65 cupsd
 3705 root      24   0 97024 6732 1760 R  1.8  0.6   0:05.73 perl
  324 root      10  -5     0    0    0 D  1.2  0.0   0:52.23 kjournald
 1859 dbus      15   0 21532 1296  812 S  0.6  0.1   0:26.48 dbus-daemon
  451 root      15   0     0    0    0 D  0.4  0.0   0:00.98 pdflush
[root@localhost ~]# strace -p 3721
Process 3721 attached - interrupt to quit
epoll_wait(1,

...

poll([{fd=11, events=POLLIN}], 1, 10000) = 1 ([{fd=11, revents=POLLIN}])
recvfrom(11, $O9TV?%2s\EOLBNRCk/tGyEzqxBX2:^..., 32768, 0, NULL, NULL) = 16000
write(12, $O9TV?%2s\EOLBNRCk/tGyEzqxBX2:^..., 16000) = 16000
epoll_wait(1, {{EPOLLIN, {u32=3753320512, u64=47308523118656}}}, 1024, 1000) = 1
recvfrom(11, R, 1, MSG_PEEK, NULL, NULL) = 1
poll([{fd=11, events=POLLIN}], 1, 10000) = 1 ([{fd=11, revents=POLLIN}])
recvfrom(11, R=9ydwDiI'+k*xA'6!0_au@.jE6xm}?h..., 32768, 0, NULL, NULL) = 16000
write(12, R=9ydwDiI'+k*xA'6!0_au@.jE6xm}?h..., 16000) = 16000
epoll_wait(1, {{EPOLLIN, {u32=3753320512, u64=47308523118656}}}, 1024, 1000) = 1
recvfrom(11, [, 1, MSG_PEEK, NULL, NULL) = 1
poll([{fd=11, events=POLLIN}], 1, 10000) = 1 ([{fd=11, revents=POLLIN}])
recvfrom(11, [u@Z'++0pu+H*4,5Z@HAu:[(V4`P~d..., 32768, 0, NULL, NULL) = 768
write(12, [u@Z'++0pu+H*4,5Z@HAu:[(V4`P~d..., 768) = 768
epoll_wait(1, {{EPOLLIN, {u32=3753320512, u64=47308523118656}}}, 1024, 1000) = 1
recvfrom(11, =, 1, MSG_PEEK, NULL, NULL) = 1
poll([{fd=11, events=POLLIN}], 1, 10000) = 1 ([{fd=11, revents=POLLIN}])
recvfrom(11, ==q\0qP#h7k4G~%#]`UaoR'LXK#ud'|..., 32768, 0, NULL, NULL) = 16000
write(12, ==q\0qP#h7k4G~%#]`UaoR'LXK#ud'|..., 16000) = 16000
epoll_wait(1, {{EPOLLIN, {u32=3753320512, u64=47308523118656}}}, 1024, 1000) = 1
recvfrom(11, l, 1, MSG_PEEK, NULL, NULL) = 1
poll([{fd=11, events=POLLIN}], 1, 10000) = 1 ([{fd=11, revents=POLLIN}])
recvfrom(11, lfYpobZ]Ue7dCfn;G4oiaI+\9'6ef`j..., 32768, 0, NULL, NULL) = 16000
write(12, lfYpobZ]Ue7dCfn;G4oiaI+\9'6ef`j..., 16000) = 16000
epoll_wait(1, {{EPOLLIN, {u32=3753320512, u64=47308523118656}}}, 1024, 1000) = 1
recvfrom(11, B, 1, MSG_PEEK, NULL, NULL) = 1
poll([{fd=11, events=POLLIN}], 1, 10000) = 1 ([{fd=11, revents=POLLIN}])
recvfrom(11, B@K)[3G]s%Ok$|+_v6Sd.IdVRa3FR!..., 32768, 0, NULL, NULL) = 768
write(12, B@K)[3G]s%Ok$|+_v6Sd.IdVRa3FR!..., 768) = 768
epoll_wait(1, {{EPOLLIN, {u32=3753320512, u64=47308523118656}}}, 1024, 1000) = 1
recvfrom(11, ?, 1, MSG_PEEK, NULL, NULL) = 1
poll([{fd=11, events=POLLIN}], 1, 10000) = 1 ([{fd=11, revents=POLLIN}])
recvfrom(11, ?%d`6R*V9YFF?\n~\\YuE(N8wMlt!-*tOn..., 32768, 0, NULL, NULL) = 16000
write(12, ?%d`6R*V9YFF?\n~\\YuE(N8wMlt!-*tOn..., 16000) = 16000
epoll_wait(1, {{EPOLLIN, {u32=3753320512, u64=47308523118656}}}, 1024, 1000) = 1
recvfrom(11, E, 1, MSG_PEEK, NULL, NULL) = 1
[root@localhost ~]# strace -e open -p 8696
Process 8696 attached - interrupt to quit
open(/var/spool/cups/000004e9, O_WRONLY|O_CREAT|O_TRUNC, 0640) = 12
open(/var/spool/cups/000004e9, O_RDONLY) = 12
open(/var/spool/cups/c41103, O_WRONLY|O_CREAT|O_TRUNC, 0666) = 12
open(/var/spool/cups/000004ea, O_WRONLY|O_CREAT|O_TRUNC, 0640) = 12
open(/var/spool/cups/000004ea, O_RDONLY) = 12
open(/var/spool/cups/c41104, O_WRONLY|O_CREAT|O_TRUNC, 0666) = 12
open(/var/spool/cups/000004eb, O_WRONLY|O_CREAT|O_TRUNC, 0640) = 12
open(/var/spool/cups/000004eb, O_RDONLY) = 12
open(/var/spool/cups/c41105, O_WRONLY|O_CREAT|O_TRUNC, 0666) = 12
open(/var/spool/cups/000004ec, O_WRONLY|O_CREAT|O_TRUNC, 0640) = 12
open(/var/spool/cups/000004ec, O_RDONLY) = 12

/var/spool/cups

  • Even with a /dev/null printer, CUPS still writes a copy of each job's data to /var/spool/cups
[root@localhost perl-examples]# ls /var/spool/cups
c42999  c43013  c43027  c43041  c43055  c43069      d43011-001  d43025-001  d43039-001  d43053-001  d43067-001
c43000  c43014  c43028  c43042  c43056  c43070      d43012-001  d43026-001  d43040-001  d43054-001  d43068-001
c43001  c43015  c43029  c43043  c43057  d42999-001  d43013-001  d43027-001  d43041-001  d43055-001  d43069-001
c43002  c43016  c43030  c43044  c43058  d43000-001  d43014-001  d43028-001  d43042-001  d43056-001  d43070-001
c43003  c43017  c43031  c43045  c43059  d43001-001  d43015-001  d43029-001  d43043-001  d43057-001  tmp
c43004  c43018  c43032  c43046  c43060  d43002-001  d43016-001  d43030-001  d43044-001  d43058-001
c43005  c43019  c43033  c43047  c43061  d43003-001  d43017-001  d43031-001  d43045-001  d43059-001
c43006  c43020  c43034  c43048  c43062  d43004-001  d43018-001  d43032-001  d43046-001  d43060-001
c43007  c43021  c43035  c43049  c43063  d43005-001  d43019-001  d43033-001  d43047-001  d43061-001
c43008  c43022  c43036  c43050  c43064  d43006-001  d43020-001  d43034-001  d43048-001  d43062-001
c43009  c43023  c43037  c43051  c43065  d43007-001  d43021-001  d43035-001  d43049-001  d43063-001
c43010  c43024  c43038  c43052  c43066  d43008-001  d43022-001  d43036-001  d43050-001  d43064-001
c43011  c43025  c43039  c43053  c43067  d43009-001  d43023-001  d43037-001  d43051-001  d43065-001
c43012  c43026  c43040  c43054  c43068  d43010-001  d43024-001  d43038-001  d43052-001  d43066-001
[root@localhost ~]# head -n2 /var/spool/cups/d43002-001
NPZ|Urb*^FJPGWTQJUq=E?0tU4eFtaBl/,)`sy/?N.]jPM\yx%Fi;sXaL,2Ag5bYvGk,}EN,*:6.lsD{x)*`lu?46!_C*OP|4[BlY8]1uO9Jyg;ll\fI3p7'[C'uU,ITIU`hru8aED%E]T/H5b=m!:i,!%z,[wOjG/o1BP?;kxgKp^W\koCMw1#FBVWG#q4Y6pi+EPLN}14gGiu{CI@~R;5{#xI;+R?;k51s76+ul
ps-@oahAr*zG'0M]$]f?1vQY!0]7*|MTTZ@.F-0|N32jJn5rIbnzGLMT)Pqpb8*;d$]}m.z@E%[i]\92eLvI4:$6RvB[?@fsBuSX?4?[}rP~eFh`.d:)mQy9o-agry.a9rJ=z5h/G^;,4YbBOs`i~Q9fB)33+8QkB!?p@AZG@@,e#?y%U5-3O2uF$|e]`VMnz{ofo6t(3Rdv8Wk'Y6#yTxTYT[4[5)Mwas0T8ICk_x'e
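Since each job leaves a c<id> control file and a d<id>-NNN data file pair as shown above, counting them gives a quick backlog estimate. A sketch (the function name is hypothetical; run against the real spool as root):

```shell
# count control (c*) and data (d*) files in a spool directory
count_spool() {
  spool=$1
  controls=$(find "$spool" -maxdepth 1 -name 'c[0-9]*' | wc -l)
  datas=$(find "$spool" -maxdepth 1 -name 'd[0-9]*' | wc -l)
  echo "control files: $controls  data files: $datas"
}
# live use: count_spool /var/spool/cups
```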
  • Turn off job history and file preservation
[root@localhost ~]# fgrep Preserve /etc/cups/cupsd.conf
PreserveJobFiles No #Specifies whether or not to preserve job files after printing.
PreserveJobHistory No #Specifies whether or not to preserve the job history after printing.

Make sure the printer is enabled: cupsenable dummy

  • Measure performance stats again
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          41.92    0.00   58.08    0.00    0.00    0.00

Device:         rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               3.19  2232.93  5.19 21.56  1160.88 18035.93   717.73    10.15  379.65  13.00  34.77
sda1              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda2              3.19  2232.93  5.19 21.56  1160.88 18035.93   717.73    10.15  379.65  13.00  34.77
dm-0              0.00     0.00  8.38 2254.49  1160.88 18035.93     8.48  1088.87  481.19   0.15  34.67
dm-1              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
hda               0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          44.11    0.20   55.69    0.00    0.00    0.00

Device:         rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               6.59   949.90  7.19 11.58  1572.85  7691.82   493.79     2.19  116.52   6.34  11.90
sda1              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda2              6.59   949.90  7.19 11.58  1572.85  7691.82   493.79     2.19  116.52   6.34  11.90
dm-0              0.00     0.00 13.77 961.48  1572.85  7691.82     9.50   200.04  205.12   0.12  11.76
dm-1              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
hda               0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
[root@localhost ~]# vmstat 5 1000
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 4  0     72 391784  24168 536260    0    0   199   876  385  278  3  4 92  2  0
 2  0     72 371308  24256 556168    0    0     0  6593 1033 1093 32 68  0  0  0
 3  0     72 357616  24328 570084    0    0     0  4862 1020  694 33 67  0  0  0
 4  0     72 338148  24420 587944    0    0     0  2248 1013  676 39 61  0  0  0
12  0     72 322756  24492 602856    0    0     0  4443 1183  733 27 73  0  0  0
 3  1     72 316044  24540 609808    0    0     0  4242 1010  417 36 64  0  0  0
 3  0     72 303924  24604 621752    0    0     0  1851 1011  686 44 56  0  0  0
 4  0     72 278320  24724 645332    0    0     0  2646 1016 1162 45 55  0  0  0
 2  0     72 250536  24868 672368    0    0     0  6120 1048  933 45 54  0  1  0
 4  0     72 222460  24984 700208    0    0     0  7875 1014  948 43 57  0  0  0
 6  1     72  11456  11584 909444    0    0   397 17951 1023 2658 37 56  0  8  0
 3  1     72   7976  11624 915744    0    0   387  3207 1015  575 40 60  0  0  0
 2  1     72   9360  10752 915124    0    0   686  4374 1026 1255 44 56  0  0  0
 2  1     72   8556  10316 916424    0    0   346  3910 1042  767 39 60  0  0  0
 2  2     72   7936   9588 916768    0    0   552  4372 1012  889 44 56  0  0  0
 1  2     72   8160   9024 916208    0    0   602  5343 1020 1168 43 57  0  0  0
 3  3     72   7916   9072 912656    0    0   590 10331 1030 2223 39 59  0  2  0
 1  2     72   8388   9148 917696    0    0   442  6034 1024 1162 43 57  0  0  0
 4  2     72   7516   9144 916292    0    0   745  4174 1021  931 44 56  0  0  0
 3  1     72   8024   9184 915628    0    0   794  5653 1022 1167 41 56  0  3  0
 3  1     72   7920   9208 917688    0    0   408   282 1054  903 36 64  0  0  0
 4  0     72   8452   9256 917500    0    0   586  4257 1019  685 41 59  0  0  0
 2  1     72   7912   9316 915008    0    0   328  2465 1023 2951 20 73  0  7  0
 8  0     72  10720   9372 913700    0    0   470 17729 1021 1902 38 61  0  1  0
 9  0     72   7860   9420 917864    0    0   595  5662 1018  719 36 64  0  0  0
 6  0     72   8304   9420 916952    0    0   397  2550 1014  646 35 65  0  0  0
 9  0     72   8832   9440 916440    0    0   595  4798 1059 1315 30 70  0  0  0
 8  0     72   8432   9464 916424    0    0   397  4923 1121  481 34 65  0  1  0
 3  0     72  10508   9512 914256    0    0   794  4658 1022  851 34 66  0  0  0
top - 12:23:15 up 18:52,  5 users,  load average: 4.58, 2.91, 1.74
Tasks: 122 total,   1 running, 121 sleeping,   0 stopped,   0 zombie
Cpu(s): 30.8%us, 55.2%sy,  0.0%ni,  0.0%id, 12.6%wa,  0.2%hi,  1.2%si,  0.0%st
Mem:   1153596k total,  1145100k used,     8496k free,    19168k buffers
Swap:  2326520k total,       72k used,  2326448k free,   905592k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
21963 root      15   0  132m 5512 2084 D 29.8  0.5   2:22.68 cupsd
21901 root      25   0 97024 6732 1760 S  3.0  0.6   0:04.98 perl
 1976 root      18   0 10228  792  692 S  1.0  0.1   0:55.91 hald-addon-stor
 1859 dbus      15   0 21532 1296  812 S  0.8  0.1   0:34.45 dbus-daemon

Queue overload

  • As the queue grows, cupsd continues to read in new print jobs while also writing the control/job files.
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 6  0     72 101392  71808 760536    0    0     0  4480 1010  840 34 66  0  0  0
 3  0     72  92248  71852 768908    0    0     0  3671 1007  507 31 69  0  0  0
 3  0     72  62868  72000 796248    0    0     0  2260 1014  707 35 65  0  0  0
 4  0     72  36108  72120 823072    0    0     0  9830 1036  945 41 59  0  0  0
 3  0     72  16888  72200 841948    0    0     0  4880 1008 1076 40 60  0  0  0
 4  0     72   9188  71744 849872    0    0     0  4875 1009  488 33 67  0  0  0
 3  0     72   8640  69680 851900    0    0    19  4875 1013  946 34 66  0  0  0
 4  0     72   8076  69724 852208    0    0     0  3271 1007  532 33 67  0  0  0
 1  2     72   8480  58680 859944    0    0   197  7745 1004 1628 32 68  0  0  0
 2  1     72   8672  58756 858828    0    0   395  6499 1011 1022 37 62  0  0  0
 4  1     72  12728  58992 855068    0    0   399 12618 1070 2746 35 65  0  0  0
 3  0     72   9804  59072 859940    0    0   394   200 1008  854 42 58  0  0  0
 4  0     72   8532  59140 861196    0    0   584  3874 1017 1069 44 56  0  0  0
 6  1     72   7864  59256 860392    0    0   774  4474 1022  934 45 55  0  0  0
 4  0     72   8128  59340 861128    0    0   956  7230 1027  938 44 56  0  0  0
  • CUPS runs a filter process (textonly, plus a sed helper) for each job
[root@localhost ~]# ps aux | fgrep cups
root     13261 56.1  0.5 136688  6248 ?        Ss   14:21   3:58 cupsd
lp       18496  0.0  0.1  65964  1172 ?        S    14:28   0:00 /bin/bash /usr/lib/cups/filter/textonly 62723 root data1.txt 1 document-name=data1.txt media=Letter finishings=3 number-up=1 job-uuid=urn:uuid:6aeb5e42-ed83-361b-64dd-703998882714 /var/spool/cups/d62723-001
lp       18501  0.0  0.0  59036   716 ?        R    14:28   0:00 sed -e s/$/?/g /var/spool/cups/d62723-001
root     18503  0.0  0.0  61164   584 pts/4    R+   14:28   0:00 fgrep cups
  • As jobs stop being sent, writes drop off immediately while reads continue, slowly decreasing as the queue is processed
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 4  1     72  17888  61184 858544    0    0  3334   108 1043  187 41 53  0  6  0
 2  1     72  16920  61196 859536    0    0   786    26 1012   84 32 65  0  3  0
 6  1     72  18012  61208 858508    0    0  2555    48 1048  150 38 56  0  6  0
 3  0     72  17016  61236 859436    0    0   984   184 1019  105 33 60  0  7  0
 2  0     72  17732  61248 858604    0    0  3586    51 1038  185 40 57  0  2  0
 3  0     72  18068  61260 858296    0    0  2109    41 1024  123 32 58  0 10  0
 3  0     72  18512  61272 858044    0    0  4686    59 1052  230 45 46  0  9  0
 1  1     72  18024  61284 858204    0    0  4581    60 1041  225 49 46  0  6  0
 1  0     72  18868  61296 857488    0    0  4998    65 1054  247 51 43  0  6  0
 1  0     72  18484  61308 857788    0    0  4801    58 1054  242 53 40  0  7  0
 2  0     72  19256  61320 857016    0    0  4983   145 1069  245 53 41  0  6  0
 1  0     72  18732  61332 857536    0    0  4650    62 1053  242 48 39  0 12  0
 1  0     72  36228  61344 840436    0    0  2319    65 1029  256 53 41  0  6  0
 2  0     72  73696  61356 803760    0    0     0    76 1005  285 64 35  0  1  0
 1  0     72 105932  61368 772000    0    0     0    68 1006  266 63 36  0  1  0
 2  0     72 135332  61388 743248    0    0     0    87 1004  259 63 37  0  0  0
 1  0     72 165336  61400 713488    0    0     0    70 1003  255 63 37  0  0  0
 3  0     72 194736  61412 684716    0    0     0   170 1027  246 63 37  0  0  0
 2  0     72 224352  61424 654968    0    0     0    64 1004  249 63 37  0  0  0
  • Overloaded conditions

Debugging cups

Results

  • Revision 15
  • 1GB ram
  • 1 x Intel(R) Core(TM)2 Duo CPU P8600 @ 2.40GHz
  • printfile and /var/spool/cups on virtual scsi disk
  • loglevel info
file size: 1011712b
file create: 1s
file copy:   0s
time to queue 15000 jobs: 4973s
time to process 15000 jobs: 4761s
  • Revision 19
  • 1GB ram
  • 1 x Intel(R) Core(TM)2 Duo CPU P8600 @ 2.40GHz
  • printfile on ramdisk
  • /var/spool/cups on local filesystem
  • 1k jobs
  • loglevel info
file size: 962560b
file create: 1s
file copy:   0s
time to queue 1000 jobs: 204s
time to process 1000 jobs: 110s
  • Revision 21
  • 1GB ram
  • 1 x Intel(R) Core(TM)2 Duo CPU P8600 @ 2.40GHz
  • printfile on ramdisk
  • /var/spool/cups on local filesystem
  • 1k jobs
  • loglevel info
file size: 1011712b
file create: 4s
file copy:   0s
time to queue 1000 jobs: 197s
time to process 1000 jobs: 115s
  • Revision 23
  • 1GB ram
  • 1 x Intel(R) Core(TM)2 Duo CPU P8600 @ 2.40GHz
  • printfile on ramdisk
  • /var/spool/cups on local filesystem
  • 15k jobs
  • loglevel info
file size: 1011712b
file create: 4s
file copy:   0s
time to queue 15000 jobs: 4725s
time to process 15000 jobs: 4581s
  • Revision 25
  • 1GB ram
  • 1 x Intel(R) Core(TM)2 Duo CPU P8600 @ 2.40GHz
  • printfile on ramdisk
  • /var/spool/cups on 564MB tmpfs
  • 1k jobs
  • loglevel info
file size: 1011712b
file create: 1s
file copy:   0s
time to queue 1000 jobs: 189s
time to process 1000 jobs: 103s
  • Revision 27
  • 1GB ram
  • 1 x Intel(R) Core(TM)2 Duo CPU P8600 @ 2.40GHz
  • printfile on ramdisk
  • /var/spool/cups on 564MB tmpfs
  • 15k jobs
  • loglevel info
file size: 1011712b
file create: 1s
file copy:   0s
time to queue 15000 jobs: 4620s
time to process 15000 jobs: 76s
  • Revision 27
  • 1GB ram
  • 1 x Intel(R) Core(TM)2 Duo CPU P8600 @ 2.40GHz
  • printfile on ramdisk
  • /var/spool/cups on 564MB tmpfs
  • 15k jobs
  • loglevel none
file size: 1011712b
file create: 1s
file copy:   0s
time to queue 15000 jobs: 4618s
time to process 15000 jobs: 86s
  • Revision 27
  • 1.8 GB ram
  • 1 x Intel(R) Core(TM)2 Duo CPU P8600 @ 2.40GHz
  • printfile on ramdisk
  • /var/spool/cups on 564MB tmpfs
  • 15k jobs
  • loglevel none
file size: 1011712b
file create: 0s
file copy:   0s
time to queue 15000 jobs: 4005s
time to process 15000 jobs: 0s
  • Revision 27
  • 978MB ram
  • 1 x QEMU Virtual CPU version 0.11.0 @ 3200.396 MHz {AMD Phenom XII}
  • printfile on ramdisk
  • /var/spool/cups on 564MB tmpfs
  • 15k jobs
  • loglevel none
file size: 1011712b
file create: 0s
file copy:   0s
time to queue 15000 jobs: 171s
time to process 15000 jobs: 4s
  • Revision 27
  • 978MB ram
  • 1 x QEMU Virtual CPU version 0.11.0 @ 3200.396 MHz {AMD Phenom XII}
  • printfile on ramdisk
  • /var/spool/cups on 490MB tmpfs
  • 15k jobs
  • loglevel info
file size: 1011712b
file create: 0s
file copy:   0s
time to queue 15000 jobs: 172s
time to process 15000 jobs: 0s
  • Revision 27
  • 978MB ram
  • 2 x QEMU Virtual CPU version 0.11.0 @ 3200.396 MHz {AMD Phenom XII}
  • printfile on ramdisk
  • /var/spool/cups on 490MB tmpfs
  • 15k jobs
  • loglevel info
file size: 1011712b
file create: 1s
file copy:   0s
time to queue 15000 jobs: 212s
time to process 15000 jobs: 0s
  • Revision 27
  • 1962MB ram
  • 2 x QEMU Virtual CPU version 0.11.0 @ 3200.396 MHz {AMD Phenom XII}
  • printfile on ramdisk
  • /var/spool/cups on 982MB tmpfs
  • 15k jobs
  • loglevel info
file size: 1011712b
file create: 0s
file copy:   0s
time to queue 15000 jobs: 231s
time to process 15000 jobs: 0s

Results 2

  • Revision 35
  • 1962MB ram
  • 2 x QEMU Virtual CPU version 0.11.0 @ 3200.396 MHz {AMD Phenom XII}
  • 11 sample pdf files
  • printfiles on local ext3 fs
  • /var/spool/cups local ext3 fs
  • 201k jobs
  • loglevel info
Total printers: 280
Time to process 201600 jobs: 3299s
  • Revision 36
  • 1962MB ram
  • 2 x QEMU Virtual CPU version 0.11.0 @ 3200.396 MHz {AMD Phenom XII}
  • 11 sample pdf files
  • printfiles on ramdisk
  • /var/spool/cups local ext3 fs
  • 201k jobs
  • loglevel info
Total printers: 280
Time to process 201600 jobs: 3049s