FPGARelated.com

Strange behaviour with Xilkernel

Started by Unknown July 21, 2008
Hi all,

I'm doing some performance tests with multi-threaded xilkernel
applications, and I always get erroneous times in programs where
context switches occur continuously. I have written a trivial
application that shows the problem:

Two threads run without any communication between them. Each one
simply iterates N times and calls yield() on every loop iteration.
This ensures one context switch per iteration per thread (both have
the same priority and the scheduling policy is SCHED_PRIO). Every M
iterations, I read the elapsed kernel tick count and print it on
screen. Both xilkernel and the application are compiled with -O2,
and the xilkernel tick period is set to 10 milliseconds.

thread body
===========

#define N    1000000
#define M    1000

int i;
int time;

for (i = 0; i < N; i++) {

  yield();

  if (!(i % M)) {
    time = xget_clock_ticks();
    xil_printf("Elapsed time: %d msec\n", time * 10);  /* 10 ms per tick */
  }
}

===========

Results indicate that the times printed by the application are always
smaller than the time that really elapses (you can measure the real
elapsed time with, for example, the Windows clock or a stopwatch). I
mean, if the program really takes about 50 seconds to finish, the
application reports about 24 seconds. This is impossible.

But if, for example, I replace yield() with sleep(1), the results now
correspond to the real elapsed time. Moreover, if I call yield() only
every X iterations (testing with progressively bigger values of X: 10,
100, 1000 ... 1000000), the printed results match the real elapsed
times more and more closely.

Can anyone explain to me what is happening? Is it a xilkernel bug?

Many thanks for your help.
Paco