Dstat 0.6.8 release

Submitted by dag on Fri, 2008/09/12 - 00:02

I just released a new Dstat. I finally spent some time doing the boring release dance:

  • Verifying all changes since 0.6.7
  • Backporting changes to the Python 1.5 version
  • Creating the release archive without all pending patches and experimental stuff
  • Verifying ChangeLog and documentation
  • Testing on all Red Hat and CentOS/RHEL versions

I hate doing all this after-hours, and that is one of the reasons I always release later than I probably should. If only I could outsource that part. I bet Open Source would be in better shape if those less interesting tasks could be delegated :-)

Another option is to find someone who would pay me for doing the boring parts; at least then I could spend more time on the Open Source projects I like.

So what is in this release?
Under the hood there is a much more accurate scheduler.
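
For the curious, the idea behind a more accurate scheduler is to sleep towards absolute deadlines computed from the start time, instead of sleeping a fixed interval each round, so that processing time does not accumulate as drift. This is only a minimal sketch of that approach, not the actual Dstat code:

    import time

    def interval_loop(interval, callback, rounds=10):
        # Compute each deadline from the original start time, so the
        # time spent in callback() and any sleep inaccuracy do not
        # accumulate from one round to the next.
        start = time.time()
        for n in range(1, rounds + 1):
            delay = start + n * interval - time.time()
            if delay > 0:
                time.sleep(delay)
            callback()

    # Example: print a timestamp once per second, ten times.
    interval_loop(1.0, lambda: print('%.3f' % time.time()))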

A few new plugins (snooze, net_packets) have also been added.

The --debug option now also affects the time plugin.

A few outstanding bugs have been fixed, and likely some new ones added ;-)

And most importantly, Dstat will now indicate when time is not linear. I hear you think... Yes, on virtual machines time may not be as linear as you would expect, and since that may affect some of the plugins (especially the time-related ones), you ought to know when it happens.

I'd like to know whether this change works in all cases, as I haven't tested it thoroughly on dynamic-tick kernels.
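
To give an idea of what such a check can look like (this is a simplified sketch, not the code Dstat actually uses), you can compare the number of ticks accounted in /proc/stat against the number you would expect from the wall clock, and complain when they diverge:

    import os, time

    USER_HZ = os.sysconf('SC_CLK_TCK')         # ticks per second per CPU
    NCPUS = os.sysconf('SC_NPROCESSORS_ONLN')  # online CPUs

    def total_ticks():
        # Sum the jiffie counters on the aggregate 'cpu' line of
        # /proc/stat, which accumulates across all online CPUs.
        for line in open('/proc/stat'):
            if line.startswith('cpu '):
                return sum(int(field) for field in line.split()[1:])

    t1, j1 = time.time(), total_ticks()
    time.sleep(1)
    t2, j2 = time.time(), total_ticks()

    expected = (t2 - t1) * USER_HZ * NCPUS
    seen = j2 - j1
    if not 0.90 * expected <= seen <= 1.10 * expected:
        print('tick problem: counted %d ticks, expected ~%d' % (seen, expected))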

A very interesting command to run on a VM host is:

dstat -M snooze -ti -I0 --debug

just to see how many ticks you get per second and how many you may be missing, and to compare that with the scheduler intervals.

I hope I can get sched_setscheduler working in Python without requiring compilation. It may improve Dstat's accuracy even more under heavy load.
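
One way that might work is calling sched_setscheduler(2) directly from libc via the ctypes module (in the standard library since Python 2.5), so no C extension has to be compiled. A minimal, untested sketch; SCHED_FIFO and the priority value are my own choice here, and the call needs root:

    import ctypes

    SCHED_FIFO = 1    # real-time FIFO policy, from <sched.h>

    class sched_param(ctypes.Structure):
        _fields_ = [('sched_priority', ctypes.c_int)]

    libc = ctypes.CDLL('libc.so.6')
    param = sched_param(1)

    # pid 0 means the calling process; needs CAP_SYS_NICE (root).
    if libc.sched_setscheduler(0, SCHED_FIFO, ctypes.byref(param)) != 0:
        print('sched_setscheduler failed (are you root?)')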

dstat oddity

When using 0.6.8 on a Xen host on our HP C-Class blades, I see the following from dstat -f:
100 0 0 0 0 0: 0 0 95 5 0 0: 0 0 97 3 0 0: 0 0 99 1 0 0: 0 0 99 1 0 0: 0 0 100 0 0 0: 0 0 99 1 0 0: 0 0 99 1 0 0| 0 0 | 611k 654k| 0 0 |1462 30
Error: tick problem detected, this should never happen !
(the above error repeated 42 times in total)
100 0 0 0 0 0: 0 0 95 5 0 0: 0 0 97 3 0 0: 0 0 99 1 0 0: 0 0 99 1 0 0: 0 0 100 0 0 0: 0 0 99 1 0 0: 0 0 99 1 0 0| 0 407k| 491k 540k| 0 0 |1328 36

Some relevant system info:
[root@enclosure103-blade01 ~]# uname -a
Linux enclosure103-blade01.mtc.ibsys.com 2.6.18-53.1.13.el5.xs4.1.0.254.273xen #1 SMP Tue Mar 4 17:55:32 EST 2008 i686 i686 i386 GNU/Linux
[root@enclosure103-blade01 ~]# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 15
model name : Intel(R) Xeon(R) CPU E5335 @ 2.00GHz
stepping : 7
cpu MHz : 2000.070
cache size : 4096 KB
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe lm constant_tsc up pni monitor ds_cpl vmx tm2 cx16 xtpr lahf_lm
bogomips : 4002.20

[root@enclosure103-blade01 ~]# cat /etc/redhat-release
XenServer release 4.1.0-7843p (xenenterprise)

Now, the blade actually has 2 quad-core processors, but the Xen dom0 only sees one of them (obviously).

Obviously a bug :-/

We need to look at this. I was fearing problems on systems with e.g. a dynamic number of ticks, but yours does not fall into that category. In my own tests with Fedora 9, both natively and in VirtualBox, and in ESX 2.5 and ESX 3 guests (also with non-dynamic-tick kernels), I was unable to trigger the problem.

Is this reproducible in a CentOS Xen guest? If so, I can debug what happens in this particular instance.

Thanks for reporting. I need to fix this before it ends up in distribution repositories :-/

Dunno, I'll roll one up on Monday and try it =P