From 990bc3f015a4f8fce2eb918375defcd44980a845 Mon Sep 17 00:00:00 2001 From: marha Date: Fri, 8 Jun 2012 09:33:13 +0200 Subject: Used synchronise script to update files --- xorg-server/doc/.gitignore | 4 + xorg-server/doc/Xserver-spec.xml | 35 ++-- xorg-server/doc/c-extensions | 122 ++++++------ xorg-server/doc/dtrace/.gitignore | 5 + xorg-server/doc/smartsched | 408 +++++++++++++++++++------------------- 5 files changed, 289 insertions(+), 285 deletions(-) create mode 100644 xorg-server/doc/.gitignore create mode 100644 xorg-server/doc/dtrace/.gitignore (limited to 'xorg-server/doc') diff --git a/xorg-server/doc/.gitignore b/xorg-server/doc/.gitignore new file mode 100644 index 000000000..2ee2ac5b4 --- /dev/null +++ b/xorg-server/doc/.gitignore @@ -0,0 +1,4 @@ +*.html +*.pdf +*.ps +*.txt diff --git a/xorg-server/doc/Xserver-spec.xml b/xorg-server/doc/Xserver-spec.xml index 2b11828fc..31b6fb05d 100644 --- a/xorg-server/doc/Xserver-spec.xml +++ b/xorg-server/doc/Xserver-spec.xml @@ -680,7 +680,7 @@ pReadmask is a pointer to the information describing the descriptors that will be waited on. -In the sample server, pTimeout is a struct timeval **, and pReadmask is +In the sample server, pTimeout is a pointer, and pReadmask is the address of the select() mask for reading. @@ -688,15 +688,14 @@ The DIX BlockHandler() iterates through the Screens, for each one calling its BlockHandler. A BlockHandler is declared thus:
- void xxxBlockHandler(nscreen, pbdata, pptv, pReadmask) - int nscreen; - pointer pbdata; - struct timeval ** pptv; + void xxxBlockHandler(pScreen, pTimeout, pReadmask) + ScreenPtr pScreen; + pointer pTimeout; pointer pReadmask;
-The arguments are the index of the Screen, the blockData field -of the Screen, and the arguments to the DIX BlockHandler(). +The arguments are a pointer to the Screen, and the arguments to the +DIX BlockHandler().
Immediately after WaitForSomething returns from the @@ -721,15 +720,14 @@ The DIX WakeupHandler() calls each Screen's WakeupHandler. A WakeupHandler is declared thus:
- void xxxWakeupHandler(nscreen, pbdata, err, pReadmask) - int nscreen; - pointer pbdata; + void xxxWakeupHandler(pScreen, result, pReadmask) + ScreenPtr pScreen; unsigned long result; pointer pReadmask;
-The arguments are the index of the Screen, the blockData field
-of the Screen, and the arguments to the DIX WakeupHandler().
+The arguments are a pointer to the Screen, and the arguments to
+the DIX WakeupHandler().
In addition to the per-screen BlockHandlers, any module may register @@ -1942,18 +1940,15 @@ FALSE. The scrInitProc should be of the following form:
- Bool scrInitProc(iScreen, pScreen, argc, argv) - int iScreen; + Bool scrInitProc(pScreen, argc, argv) ScreenPtr pScreen; int argc; char **argv;
-iScreen is the index for this screen; 0 for the first one initialized, -1 for the second, etc. pScreen is the pointer to the screen's new -ScreenRec. argc and argv are as before. Your screen initialize -procedure should return TRUE upon success or FALSE if the screen -cannot be initialized (for instance, if the screen hardware does not -exist on this machine). +pScreen is the pointer to the screen's new ScreenRec. argc and argv +are as before. Your screen initialize procedure should return TRUE +upon success or FALSE if the screen cannot be initialized (for + instance, if the screen hardware does not exist on this machine). This procedure must determine what actual device it is supposed to initialize. If you have a different procedure for each screen, then it is no problem. diff --git a/xorg-server/doc/c-extensions b/xorg-server/doc/c-extensions index eb33e272b..984022333 100644 --- a/xorg-server/doc/c-extensions +++ b/xorg-server/doc/c-extensions @@ -1,61 +1,61 @@ -First of all: C89 or better. If you don't have that, port gcc first. - -Use of C language extensions throughout the X server tree ---------------------------------------------------------- - -Optional extensions: -The server will still build if your toolchain does not support these -extensions, although the results may not be optimal. - - * _X_SENTINEL(x): member x of the passed structure must be NULL, e.g.: - void parseOptions(Option *options _X_SENTINEL(0)); - parseOptions("foo", "bar", NULL); /* this is OK */ - parseOptions("foo", "bar", "baz"); /* this is not */ - This definition comes from Xfuncproto.h in the core - protocol headers. - * _X_ATTRIBUTE_PRINTF(x, y): This function has printf-like semantics; - check the format string when built with - -Wformat (gcc) or similar. - * _X_EXPORT: this function should appear in symbol tables. - * _X_HIDDEN: this function should not appear in the _dynamic_ symbol - table. 
- * _X_INTERNAL: like _X_HIDDEN, but attempt to ensure that this function - is never called from another module. - * _X_INLINE: inline this functon if possible (generally obeyed unless - disabling optimisations). - * _X_DEPRECATED: warn on use of this function. - -Mandatory extensions: -The server will not build if your toolchain does not support these extensions. - - * named initialisers: explicitly initialising structure members, e.g.: - struct foo bar = { .baz = quux, .brian = "dog" }; - * variadic macros: macros with a variable number of arguments, e.g.: - #define DebugF(x, ...) /**/ - * interleaved code and declarations: { foo = TRUE; int bar; do_stuff(); } - - -Use of OS and library facilities throughout the X server tree -------------------------------------------------------------- - -Non-OS-dependent code can assume facilities at least as good as -the non-OS-facility parts of POSIX-1.2001. Ideally this would -be C99, but even gcc+glibc doesn't implement that yet. - -Unix-like systems are assumed to be at least as good as UNIX03. - -Linux systems must be at least 2.4 or later. As a practical matter -though, 2.4 kernels never receive any testing. Use 2.6 already. - -TODO: Solaris. - -TODO: *BSD. - -Code that needs to be portable to Windows should be careful to, -well, be portable. Note that there are two Windows ports, cygwin and -mingw. Cygwin is more or less like Linux, but mingw is a bit more -restrictive. TODO: document which versions of Windows we actually care -about. - -OSX support is generally limited to the most recent version. Currently -that means 10.5. +First of all: C89 or better. If you don't have that, port gcc first. + +Use of C language extensions throughout the X server tree +--------------------------------------------------------- + +Optional extensions: +The server will still build if your toolchain does not support these +extensions, although the results may not be optimal. 
+
+ * _X_SENTINEL(x): member x of the passed structure must be NULL, e.g.:
+     void parseOptions(Option *options _X_SENTINEL(0));
+     parseOptions("foo", "bar", NULL); /* this is OK */
+     parseOptions("foo", "bar", "baz"); /* this is not */
+   This definition comes from Xfuncproto.h in the core
+   protocol headers.
+ * _X_ATTRIBUTE_PRINTF(x, y): This function has printf-like semantics;
+   check the format string when built with
+   -Wformat (gcc) or similar.
+ * _X_EXPORT: this function should appear in symbol tables.
+ * _X_HIDDEN: this function should not appear in the _dynamic_ symbol
+   table.
+ * _X_INTERNAL: like _X_HIDDEN, but attempt to ensure that this function
+   is never called from another module.
+ * _X_INLINE: inline this function if possible (generally obeyed unless
+   disabling optimisations).
+ * _X_DEPRECATED: warn on use of this function.
+
+Mandatory extensions:
+The server will not build if your toolchain does not support these extensions.
+
+ * named initialisers: explicitly initialising structure members, e.g.:
+     struct foo bar = { .baz = quux, .brian = "dog" };
+ * variadic macros: macros with a variable number of arguments, e.g.:
+     #define DebugF(x, ...) /**/
+ * interleaved code and declarations: { foo = TRUE; int bar; do_stuff(); }
+
+
+Use of OS and library facilities throughout the X server tree
+-------------------------------------------------------------
+
+Non-OS-dependent code can assume facilities at least as good as
+the non-OS-facility parts of POSIX-1.2001.  Ideally this would
+be C99, but even gcc+glibc doesn't implement that yet.
+
+Unix-like systems are assumed to be at least as good as UNIX03.
+
+Linux systems must be at least 2.4 or later.  As a practical matter
+though, 2.4 kernels never receive any testing.  Use 2.6 already.
+
+TODO: Solaris.
+
+TODO: *BSD.
+
+Code that needs to be portable to Windows should be careful to,
+well, be portable.  Note that there are two Windows ports, cygwin and
+mingw.
Cygwin is more or less like Linux, but mingw is a bit more +restrictive. TODO: document which versions of Windows we actually care +about. + +OSX support is generally limited to the most recent version. Currently +that means 10.5. diff --git a/xorg-server/doc/dtrace/.gitignore b/xorg-server/doc/dtrace/.gitignore new file mode 100644 index 000000000..0d40e0d22 --- /dev/null +++ b/xorg-server/doc/dtrace/.gitignore @@ -0,0 +1,5 @@ +*.html +*.pdf +*.ps +*.txt +*.db diff --git a/xorg-server/doc/smartsched b/xorg-server/doc/smartsched index 057a759fd..466408431 100644 --- a/xorg-server/doc/smartsched +++ b/xorg-server/doc/smartsched @@ -1,204 +1,204 @@ - Client Scheduling in X - Keith Packard - SuSE - 10/28/99 - -History: - -Since the original X server was written at Digital in 1987, the OS and DIX -layers shared responsibility for scheduling the order to service -client requests. The original design was simplistic; under the maximum -first make it work, then make it work well, this was a good idea. Now -that we have a bit more experience with X applications, it's time to -rethink the design. - -The basic dispatch loop in DIX looks like: - - for (;;) - { - nready = WaitForSomething (...); - while (nready--) - { - isItTimeToYield = FALSE; - while (!isItTimeToYield) - { - if (!ReadRequestFromClient (...)) - break; - (execute request); - } - } - } - -WaitForSomething looks like: - - for (;;) - if (ANYSET (ClientsWithInput)) - return popcount (ClientsWithInput); - select (...) 
- compute clientsReadable from select result; - return popcount (clientsReadable) - } - -ReadRequestFromClient looks like: - - if (!fullRequestQueued) - { - read (); - if (!fullRequestQueued) - { - remove from ClientsWithInput; - timesThisConnection = 0; - return 0; - } - } - if (twoFullRequestsQueued) - add to ClientsWithInput; - - if (++timesThisConnection >= 10) - { - isItTimeToYield = TRUE; - timesThisConnection = 0; - } - return 1; - -Here's what happens in this code: - -With a single client executing a stream of requests: - - A client sends a packet of requests to the server. - - WaitForSomething wakes up from select and returns that client - to Dispatch - - Dispatch calls ReadRequestFromClient which reads a buffer (4K) - full of requests from the client - - The server executes requests from this buffer until it emptys, - in two stages -- 10 requests at a time are executed in the - inner Dispatch loop, a buffer full of requests are executed - because WaitForSomething immediately returns if any clients - have complete requests pending in their input queues. - - When the buffer finally emptys, the next call to ReadRequest - FromClient will return zero and Dispatch will go back to - WaitForSomething; now that the client has no requests pending, - WaitForSomething will block in select again. If the client - is active, this select will immediately return that client - as ready to read. - -With multiple clients sending streams of requests, the sequence -of operations is similar, except that ReadRequestFromClient will -set isItTimeToYield after each 10 requests executed causing the -server to round-robin among the clients with available requests. - -It's important to realize here that any complete requests which have been -read from clients will be executed before the server will use select again -to discover input from other clients. A single busy client can easily -monopolize the X server. 
- -So, the X server doesn't share well with clients which are more interactive -in nature. - -The X server executes at most a buffer full of requests before again heading -into select; ReadRequestFromClient causes the server to yield when the -client request buffer doesn't contain a complete request. When -that buffer is executed quickly, the server spends a lot of time -in select discovering that the same client again has input ready. Thus -the server also runs busy clients less efficiently than is would be -possible. - -What to do. - -There are several things evident from the above discussion: - - 1 The server has a poor metric for deciding how much work it - should do at one time on behalf of a particular client. - - 2 The server doesn't call select often enough to detect less - aggressive clients in the face of busy clients, especially - when those clients are executing slow requests. - - 3 The server calls select too often when executing fast requests. - - 4 Some priority scheme is needed to keep interactive clients - responding to the user. - -And, there are some assumptions about how X applications work: - - 1 Each X request is executed relatively quickly; a request-granularity - is good enough for interactive response almost all of the time. - - 2 X applications receiving mouse/keyboard events are likely to - warrant additional attention from the X server. - -Instead of a request-count metric for work, a time-based metric should be -used. The server should select a reasonable time slice for each client -and execute requests for the entire timeslice before yielding to -another client. - -Instead of returning immediately from WaitForSomething if clients have -complete requests queued, the server should go through select each -time and gather as many ready clients as possible. This involves -polling instead of blocking and adding the ClientsWithInput to -clientsReadable after the select returns. 
- -Instead of yielding when the request buffer is empty for a particular -client, leave the yielding to the upper level scheduling and allow -the server to try and read again from the socket. If the client -is busy, another buffer full of requests will already be waiting -to be delivered thus avoiding the call through select and the -additional overhead in WaitForSomething. - -Finally, the dispatch loop should not simply execute requests from the -first available client, instead each client should be prioritized with -busy clients penalized and clients receiving user events praised. - -How it's done: - -Polling the current time of day from the OS is too expensive to -be done at each request boundary, so instead an interval timer is -set allowing the server to track time changes by counting invocations -of the related signal handler. Instead of using the wall time for -this purpose, the process CPU time is used instead. This serves -two purposes -- first, it allows the server to consume no CPU cycles -when idle, second it avoids conflicts with SIGALRM usage in other -parts of the server code. It's not without problems though; other -CPU intensive processes on the same machine can reduce interactive -response time within the X server. The dispatch loop can now -calculate an approximate time value using the number of signals -received. The granularity of the timer sets the scheduling jitter, -at 20ms it's only occasionally noticeable. - -The changes to WaitForSomething and ReadRequestFromClient are -straightforward, adjusting when select is called and avoiding -setting isItTimeToYield too often. - -The dispatch loop changes are more extensive, now instead of -executing requests from all available clients, a single client -is chosen after each call to WaitForSomething, requests are -executed for that client and WaitForSomething is called again. - -Each client is assigned a priority, the dispatch loop chooses the -client with the highest priority to execute. 
Priorities are
-updated in three ways:
-
-    1.	Clients which consume their entire slice are penalized
-	by having their priority reduced by one until they
-	reach some minimum value.
-
-    2.	Clients which have executed no requests for some time
-	are praised by having their priority raised until they
-	return to normal priority.
-
-    3.	Clients which receive user input are praised by having
-	their priority rased until they reach some maximal
-	value, above normal priority.
-
-The effect of these changes is to both improve interactive application
-response and benchmark numbers at the same time.
-
-
-
-
-
-$XFree86: $
+			Client Scheduling in X
+			    Keith Packard
+				SuSE
+			      10/28/99
+
+History:
+
+Since the original X server was written at Digital in 1987, the OS and DIX
+layers shared responsibility for scheduling the order to service
+client requests.  The original design was simplistic; under the maxim
+"first make it work, then make it work well", this was a good idea.  Now
+that we have a bit more experience with X applications, it's time to
+rethink the design.
+
+The basic dispatch loop in DIX looks like:
+
+	for (;;)
+	{
+		nready = WaitForSomething (...);
+		while (nready--)
+		{
+			isItTimeToYield = FALSE;
+			while (!isItTimeToYield)
+			{
+				if (!ReadRequestFromClient (...))
+					break;
+				(execute request);
+			}
+		}
+	}
+
+WaitForSomething looks like:
+
+	for (;;)
+	{
+		if (ANYSET (ClientsWithInput))
+			return popcount (ClientsWithInput);
+		select (...)
compute clientsReadable from select result;
+		return popcount (clientsReadable)
+	}
+
+ReadRequestFromClient looks like:
+
+	if (!fullRequestQueued)
+	{
+		read ();
+		if (!fullRequestQueued)
+		{
+			remove from ClientsWithInput;
+			timesThisConnection = 0;
+			return 0;
+		}
+	}
+	if (twoFullRequestsQueued)
+		add to ClientsWithInput;
+
+	if (++timesThisConnection >= 10)
+	{
+		isItTimeToYield = TRUE;
+		timesThisConnection = 0;
+	}
+	return 1;
+
+Here's what happens in this code:
+
+With a single client executing a stream of requests:
+
+	A client sends a packet of requests to the server.
+	WaitForSomething wakes up from select and returns that client
+	to Dispatch.
+	Dispatch calls ReadRequestFromClient, which reads a buffer (4K)
+	full of requests from the client.
+	The server executes requests from this buffer until it empties,
+	in two stages -- 10 requests at a time are executed in the
+	inner Dispatch loop, and a whole buffer of requests is executed
+	because WaitForSomething immediately returns if any clients
+	have complete requests pending in their input queues.
+	When the buffer finally empties, the next call to
+	ReadRequestFromClient will return zero and Dispatch will go
+	back to WaitForSomething; now that the client has no requests
+	pending, WaitForSomething will block in select again.  If the
+	client is active, this select will immediately return that
+	client as ready to read.
+
+With multiple clients sending streams of requests, the sequence
+of operations is similar, except that ReadRequestFromClient will
+set isItTimeToYield after every 10 requests executed, causing the
+server to round-robin among the clients with available requests.
+
+It's important to realize here that any complete requests which have been
+read from clients will be executed before the server will use select again
+to discover input from other clients.  A single busy client can easily
+monopolize the X server.
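The fast path in the WaitForSomething pseudocode above, returning immediately whenever some client already has a complete request queued and consulting select only otherwise, can be sketched in plain C. This is a minimal illustration: the 32-bit mask type, the function names, and the popcount helper are assumptions for the example, not the server's actual code.

```c
#include <stdint.h>

/* Clear-lowest-bit popcount; stands in for the pseudocode's popcount(). */
static int popcount32(uint32_t m)
{
    int n = 0;
    while (m) {
        m &= m - 1;   /* clear the lowest set bit */
        n++;
    }
    return n;
}

/* Mirrors the old fast path: if any client already has a full request
 * queued (ANYSET(ClientsWithInput)), skip select() entirely and return
 * those clients; otherwise fall back to the select() result.  Client i
 * is bit i of each mask. */
static int wait_for_something(uint32_t clients_with_input,
                              uint32_t select_ready,
                              uint32_t *clients_readable)
{
    if (clients_with_input) {
        *clients_readable = clients_with_input;
        return popcount32(clients_with_input);
    }
    *clients_readable = select_ready;
    return popcount32(select_ready);
}
```

Note how a client whose buffered requests keep ClientsWithInput non-empty is returned again and again without select() ever running, which is exactly the monopolization described above.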
+
+So, the X server doesn't share well with clients which are more interactive
+in nature.
+
+The X server executes at most a buffer full of requests before again heading
+into select; ReadRequestFromClient causes the server to yield when the
+client request buffer doesn't contain a complete request.  When
+that buffer is executed quickly, the server spends a lot of time
+in select discovering that the same client again has input ready.  Thus
+the server also runs busy clients less efficiently than would be
+possible.
+
+What to do.
+
+There are several things evident from the above discussion:
+
+    1	The server has a poor metric for deciding how much work it
+	should do at one time on behalf of a particular client.
+
+    2	The server doesn't call select often enough to detect less
+	aggressive clients in the face of busy clients, especially
+	when those clients are executing slow requests.
+
+    3	The server calls select too often when executing fast requests.
+
+    4	Some priority scheme is needed to keep interactive clients
+	responding to the user.
+
+And, there are some assumptions about how X applications work:
+
+    1	Each X request is executed relatively quickly; a request-granularity
+	is good enough for interactive response almost all of the time.
+
+    2	X applications receiving mouse/keyboard events are likely to
+	warrant additional attention from the X server.
+
+Instead of a request-count metric for work, a time-based metric should be
+used.  The server should select a reasonable time slice for each client
+and execute requests for the entire timeslice before yielding to
+another client.
+
+Instead of returning immediately from WaitForSomething if clients have
+complete requests queued, the server should go through select each
+time and gather as many ready clients as possible.  This involves
+polling instead of blocking and adding the ClientsWithInput to
+clientsReadable after the select returns.
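The time-based work metric proposed above can be approximated with a tick counter bumped by an interval-timer signal handler, so that the dispatch loop only compares integers at request boundaries. A minimal sketch, assuming the 20ms granularity mentioned later in the document and a hypothetical 40ms slice; the names, the slice length, and the structure are illustrative, not the server's actual implementation.

```c
#include <stdbool.h>

#define TICK_MS   20   /* timer granularity; sets the scheduling jitter */
#define SLICE_MS  40   /* hypothetical per-client time slice */

/* Bumped from a SIGALRM-style handler in the real design; here it is an
 * ordinary counter so the logic can be exercised directly. */
static volatile unsigned long smart_ticks;

static void timer_tick(void)   /* stand-in for the signal handler */
{
    smart_ticks++;
}

/* Called at request boundaries by the dispatch loop: has the client
 * running since slice_start_ticks consumed its whole slice? */
static bool slice_expired(unsigned long slice_start_ticks)
{
    return (smart_ticks - slice_start_ticks) * TICK_MS >= SLICE_MS;
}
```

The dispatch loop would record `smart_ticks` when it picks a client and call `slice_expired` after each request, yielding only when the slice is used up rather than after a fixed request count.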
+ +Instead of yielding when the request buffer is empty for a particular +client, leave the yielding to the upper level scheduling and allow +the server to try and read again from the socket. If the client +is busy, another buffer full of requests will already be waiting +to be delivered thus avoiding the call through select and the +additional overhead in WaitForSomething. + +Finally, the dispatch loop should not simply execute requests from the +first available client, instead each client should be prioritized with +busy clients penalized and clients receiving user events praised. + +How it's done: + +Polling the current time of day from the OS is too expensive to +be done at each request boundary, so instead an interval timer is +set allowing the server to track time changes by counting invocations +of the related signal handler. Instead of using the wall time for +this purpose, the process CPU time is used instead. This serves +two purposes -- first, it allows the server to consume no CPU cycles +when idle, second it avoids conflicts with SIGALRM usage in other +parts of the server code. It's not without problems though; other +CPU intensive processes on the same machine can reduce interactive +response time within the X server. The dispatch loop can now +calculate an approximate time value using the number of signals +received. The granularity of the timer sets the scheduling jitter, +at 20ms it's only occasionally noticeable. + +The changes to WaitForSomething and ReadRequestFromClient are +straightforward, adjusting when select is called and avoiding +setting isItTimeToYield too often. + +The dispatch loop changes are more extensive, now instead of +executing requests from all available clients, a single client +is chosen after each call to WaitForSomething, requests are +executed for that client and WaitForSomething is called again. + +Each client is assigned a priority, the dispatch loop chooses the +client with the highest priority to execute. 
Priorities are
+updated in three ways:
+
+    1.	Clients which consume their entire slice are penalized
+	by having their priority reduced by one until they
+	reach some minimum value.
+
+    2.	Clients which have executed no requests for some time
+	are praised by having their priority raised until they
+	return to normal priority.
+
+    3.	Clients which receive user input are praised by having
+	their priority raised until they reach some maximal
+	value, above normal priority.
+
+The effect of these changes is to both improve interactive application
+response and benchmark numbers at the same time.
+
+
+
+
+
+$XFree86: $
-- cgit v1.2.3
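The three priority-update rules can be sketched as follows. The bounds and step sizes here are illustrative assumptions for the example; the document does not give the values the server actually uses.

```c
/* Illustrative priority bounds; not the server's actual constants. */
#define PRIO_MIN    (-20)
#define PRIO_NORMAL   0
#define PRIO_MAX      20

typedef struct {
    int priority;
} SchedClient;

/* Rule 1: a client that consumed its entire slice is penalized by one,
 * down to some minimum value. */
static void sched_penalize(SchedClient *c)
{
    if (c->priority > PRIO_MIN)
        c->priority--;
}

/* Rule 2: a client that has executed no requests for some time is
 * praised, but only back up to normal priority. */
static void sched_praise_idle(SchedClient *c)
{
    if (c->priority < PRIO_NORMAL)
        c->priority++;
}

/* Rule 3: a client that received user input is praised up to a maximal
 * value above normal priority. */
static void sched_praise_input(SchedClient *c)
{
    if (c->priority < PRIO_MAX)
        c->priority++;
}
```

The dispatch loop would then simply pick the ready client with the highest `priority` after each WaitForSomething call.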