Introduction

This document describes how pointer acceleration differs between X Server 1.6 and current (git master), and also keeps some information about this specific implementation, since some things have changed substantially. The following holds for server 1.6:

  1. There are no device properties for pointer acceleration

  2. Velocity approximation is done using an algorithm that has since been superseded

This mainly means that the only way to fine-tune pointer acceleration is via xorg.conf or hal. However, the classic X controls still apply:

  xset m 18/10 0

or equivalent

  xinput set-ptr-feedback <device> 0 18 10

The same goes for velocity scaling, profile, and the other basic options described in the main document.

Fine-tuning velocity approximation

The following options affect how the velocity estimate is calculated and require some understanding of the algorithm.

FilterHalflife [real]

The half-life of the weighting applied to approximate velocity. Higher values exhibit more integrating behaviour, introducing some lag, but may feel smoother. Lower values are more responsive, but less smooth. Higher values may need to be accompanied by a looser coupling, or they won't take effect. Default: 20 ms.
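
In server 1.6, these options go into the pointer's InputDevice section of xorg.conf, along these lines (a sketch; the Identifier and Driver values are placeholders for your setup):

   Section "InputDevice"
       Identifier "Configured Mouse"
       Driver     "mouse"
       Option     "FilterHalflife" "20"
   EndSection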

FilterChainProgression [real]

FilterChainLength [integer]

You can use these two options for high-quality velocity estimates. In a few tests, this approach made the other OS's acceleration feel jerky :)

If you want to try a filter chain, set ?FilterHalflife to some small value, e.g. 5 to 10 for a device reporting every 10 ms. The first filter will use this half-life, which should be very responsive. The next ?FilterChainLength - 1 filters will be progressively more integrating (thus less responsive). The progression should be high enough that the last filter can average several motion reports; up to ~300 ms is sensible. You may want to tighten coupling in such a setup.

Example filter chain half-lives for length 4, half-life 5, progression 2:

  * 5  10  20  40 

Of course, more filters are computationally more intensive. Up to 8 are allowed.
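
In xorg.conf terms, the example above would read something like this (a sketch, using the option names documented in this section):

   Option "FilterHalflife"         "5"
   Option "FilterChainProgression" "2.0"
   Option "FilterChainLength"      "4"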

VelocityReset [integer]

Specifies after how many milliseconds of inactivity the non-visible state (i.e. the subpixel position) is discarded. This affects three things:

  1. Two mouse strokes that are more than VelocityReset milliseconds apart do not affect each other. This is negligible, though.
  2. The velocity estimate remains correct within this time if the pointer/X is stuck for a short moment, not querying the moving device.
  3. Slow movements are estimated correctly if all device movement events are within this time of each other. Increasing the value may be necessary to take full advantage of adaptive deceleration.

Default: 300 ms.

VelocityCoupling [real]

Specifies the coupling, a feature ensuring responsiveness by determining whether the filter's velocity estimate is a valid match for the current velocity. The weighted estimate is deemed valid if it differs from the current velocity by less than this factor. Precisely:

fabs(cur_velocity - filtered_velocity) <= ((cur_velocity + filtered_velocity) * coupling)

Should it be deemed invalid, the velocity estimate is overridden by the current velocity and the filter data is discarded.

Higher values make it more likely that the filtered velocity is deemed valid. IOW: a lower value tightens coupling, a higher value loosens it.

Default is 0.25, or 25%.
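
For example, with the default coupling of 0.25 and a current velocity of 10, a filtered estimate of 8 is accepted, since fabs(10 - 8) = 2 <= (10 + 8) * 0.25 = 4.5. An estimate of 4, however, would be discarded (6 > 3.5), and the filter data along with it.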

Softening [boolean]

Tweaks motion deltas from the device before applying acceleration, to smooth out rather constant moves. The tweaking always stays below device precision to make sure it doesn't get in the way. When ?ConstantDeceleration is used, Softening is not enabled by default, because constant deceleration already provides better (i.e. real) subpixel information.

AccelerationProfileAveraging [boolean]

By default, acceleration profiles are averaged between the previous event's velocity estimate and the current one. This is meant to improve predictability. However, it has only a small impact (practically zero for linear profiles), and can be viewed as increasing response times, so you can save some cycles by disabling it if you care.

AccelerationScheme [string]

Selects the scheme. All other options apply only to the predictable scheme (the default).
predictable
* the scheme discussed here (default)
lightweight
* the previous scheme (i.e. exactly as in X server 1.5 and before). If you prefer saving some cycles over increased usability, choose this.
none
* disable acceleration/deceleration
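
For example, to switch server-side acceleration off entirely via xorg.conf (a sketch using the option name above):

   Option "AccelerationScheme" "none"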

Technical details

Driver side API

In general, a driver does not need to act in any way. Acceleration is initialized during dix's ?InitValuatorClassDeviceStruct, and user settings on acceleration are loaded when a driver calls xf86InitValuatorDefaults (xf86IVD for short). These are already called by just about every driver. In general, driver-specific interaction should happen before xf86IVD, so that user settings take precedence.

But of course it depends on what you want. This small API essentially lets the driver have its say on acceleration and scaling related issues. Proper use can improve the user experience, e.g. by avoiding double acceleration on synaptics pads.

The relevant header is ptrveloc.h. Most interaction requires a ?DeviceVelocityPtr. Use

   GetDevicePredictableAccelData(DeviceIntPtr)

to obtain it (this may return NULL). After xf86IVD, it can also serve as an indicator of whether the predictable scheme is in effect.
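
A minimal sketch (names as declared in ptrveloc.h; error handling elided):

   DeviceVelocityPtr vel = GetDevicePredictableAccelData(dev);
   if (vel != NULL) {
       /* predictable scheme is in effect; tune vel here */
   }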

Reporting rate

If a driver knows the anticipated reporting rate of the device in advance, it might choose to override the default ?VelocityScaling to improve velocity estimates:

  your_velocityPtr->corr_mul = 1000.0f / rate;  /* rate = reports per second */

This is especially worth the effort when the rate differs significantly from the default assumption of 100 Hz.

Scaling issues

Also, if your device has very high precision, you can postpone downscaling:

  /* deliver deltas at twice the resolution; dix scales down by 0.5
   * after acceleration */
  your_velocityPtr->const_acceleration *= 0.5f;

This makes the full device precision available for velocity estimation, and has potentially more benefits (like not distributing stale remainders all along the event loop). Plus, you don't have to do the downscaling yourself :)

Caveat: since there is no error correction on the dix side, a driver whose device has some error in the signal (like synaptics, at least on my laptop) should downscale just enough for the error to become insignificant. Any surplus scaling can then still be done in dix.

Device-specific profile

A hardware driver may use its knowledge about the device to create a special acceleration profile. This can be installed using

  SetDeviceSpecificAccelerationProfile()

In order to make it the default for the device, simply call

   SetAccelerationProfile(velocityPtr, AccelProfileDeviceSpecific);

The user may always select it using profile 1.
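
Put together, a driver might do something like the following sketch. It assumes the 1.6-era signature of PointerAccelerationProfileFunc from ptrveloc.h (check the header of your server version); the curve itself is purely illustrative:

   /* illustrative profile: no acceleration below the threshold,
    * linear above it, capped at acc */
   static float
   MyDeviceProfile(DeviceVelocityPtr vel, float velocity,
                   float threshold, float acc)
   {
       if (threshold <= 0 || velocity <= threshold)
           return 1.0f;
       return (velocity / threshold < acc) ? velocity / threshold : acc;
   }

   /* at init time, after xf86InitValuatorDefaults: */
   DeviceVelocityPtr vel = GetDevicePredictableAccelData(dev);
   if (vel) {
       SetDeviceSpecificAccelerationProfile(vel, MyDeviceProfile);
       SetAccelerationProfile(vel, AccelProfileDeviceSpecific);
   }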

Leave me alone

If you ultimately want no server-side acceleration to be performed, call

   InitPointerAccelerationScheme(dev, PtrAccelNoOp);

This disables constant deceleration too.

Velocity approximation

Velocity determines the amount of acceleration. Getting it right is crucial to actually improving things. Moreover, the general algorithm has a lot of corner cases that work out poorly. Of course there are fixes, so in the end it's a bit complicated:

Given the available data, we calculate dots per millisecond. A correction is applied to account for slow diagonal moves, which are reported as alternating horizontal/vertical mickeys. These would otherwise be estimated 41% (a factor of sqrt(2)) too fast.

Dots per millisecond is quite intractable in integers. The X controls are partly integer, and changing that would break too much. Thus, we multiply by a configurable factor to arrive at values the usual X controls can be applied to.

Also, the constant deceleration is multiplied into the current velocity. This is intended to make acceleration depend on the (unaccelerated) on-screen velocity, so one doesn't need to accommodate different constant decelerations later in the acceleration controls (basically for laptops).
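
In code terms, the scaled velocity thus amounts to something like this (an illustrative sketch; dots, delta_t and diag_correction are placeholder names for the quantities above, corr_mul and const_acceleration are the fields described under "Driver side API"):

   /* dots per millisecond, corrected for alternating diagonal mickeys,
    * scaled into the range X controls expect, with constant
    * deceleration folded in */
   velocity = (dots / delta_t) * diag_correction
              * vel->corr_mul * vel->const_acceleration;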

This velocity is then fed into a chain of filters. Well, by default it is a single filter; at most there are 8. This means the velocity is tracked by potentially several filters, each with a different response.

After that, the most integrating filter response that is in range of the current velocity is selected. This ensures good responsiveness as well as a likely precise velocity estimate.

However, even the single filter used by default is enough for most people/devices. The reason: typically, at the beginning and end of a mouse stroke, filtering is simply not appropriate. Therefore, coupling is employed to override the velocity estimate whenever the filter responses seem inappropriate. Basically, you can either rely on coupling to provide good responsiveness, or have a filter suited to every phase of a stroke and loosen coupling.

Additional trickery

After some short inactivity time (300 ms by default), the background data is discarded (called a non-visible state reset in the patch). This ensures that no two distinct strokes affect each other.

When this happens, the velocity estimate is usually too low. To fix this, the acceleration applied is always evaluated as unity (IOW, the profile is skipped). This is intended to make sure the mickey isn't scaled to near zero, so it is actually reflected on the screen. It does not override constant deceleration.

One event after a non-visible state reset, the current velocity is 'stuffed' into the filter chain. Stuffing means every filter stage is overridden. Since this is the first time within a stroke that we have a reasonable velocity, this is expected to somewhat increase precision.

Filter design

This section might be inaccurate due to the author's limited mathematical vocabulary in English. I apologize should this cause any inconvenience.

To aid velocity approximation, the velocity of the input reports is filtered. This section describes the filter design; how it is driven is described above.

First off, we're not talking about a digital filter in the sense of signal theory. Signal theory doesn't apply here, mainly because there is no constant amount of time between two reports; consequently, there is no such thing as a frequency response.

The filtering is organized into one (by default) up to a maximum of 8 stages. When several stages are used, they are expected to be ordered from least to most integrating.

Each stage can roughly be compared to an IIR filter of first order:

current = v0 * a0 + v1 * a1

where v0 is the current velocity (that is, delta distance divided by delta time), and v1 is the previous value of current.

However, we vary the coefficients with delta t using an exponential decay. To do so, each stage stores, besides its current value, a half-life, which is used to derive the coefficients:

a1 = 0.5 ^ (delta_t / half_life)
a0 = 1 - a1

The a1 function is in fact the tail integral (from delta_t to infinity) of our true weighting function

w(t) = (0.693... / half_life) * 0.5 ^ (t / half_life)

where 0.693... is ln(2), so that the integral of w from zero to infinity is unity. Therefore, a0 = 1 - a1 is trivial to obtain.

Obviously this is, in filter speak, BIBO-stable (at least for delta t > 0 and a sane half-life). IOW, the filter output stays bounded as long as its input does; nothing is lost.

Another way of saying it: after one half-life has passed, the filter's current value is determined 50% by the current velocity and 50% by what came before. Since the exponential satisfies 0.5^((t1 + t2) / h) = 0.5^(t1 / h) * 0.5^(t2 / h), applying the update incrementally yields a result equivalent to weighting the individual values over their valid range (which is, from the corresponding mickey's time to the next) by the function w above.

In most cases, this boils down to the following, per filter:

  • 1 lookup
  • 1 sub
  • 2 mul
  • 1 add
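
A direct transcription of one stage update might look like this (a sketch; the "1 lookup" above suggests the server replaces the pow() call with a table lookup):

   #include <math.h>

   /* one filter stage: draw the stored value towards the current
    * velocity, weighted by the time delta and the stage's half-life */
   static double
   stage_update(double *current, double velocity,
                double delta_t, double half_life)
   {
       double a1 = pow(0.5, delta_t / half_life);        /* the lookup */
       *current = velocity * (1.0 - a1) + *current * a1; /* sub, 2 mul, add */
       return *current;
   }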

The net effect is that the filter's current value always draws towards the current velocity. It does so more strongly with

  • higher delta t
  • lower half-life

When employing a chain of such filters, there is likely always one that matches the device's actual current velocity well. The matter then becomes deciding which.

Interaction with synaptics / about any smart driver

I noticed two important things to consider when using the synaptics driver (or any other driver doing substantially more than decoding mickeys):

The synaptics driver seems to implement its own acceleration, which should be switched off; two ways of accelerating the pointer certainly do no good. This can be accomplished by setting the two options '?MaxSpeed' and '?MinSpeed' to the same value.

I chose ?MaxSpeed = ?MinSpeed = 1, which seemingly made the native touchpad resolution available to X and in turn was far too responsive. I had to apply a ?ConstantDeceleration of 8 to be able to work with it. This also makes the full device precision available for velocity estimation.
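
As an xorg.conf sketch (the Identifier is a placeholder; option names as discussed above):

   Section "InputDevice"
       Identifier "touchpad"
       Driver     "synaptics"
       Option     "MinSpeed"             "1"
       Option     "MaxSpeed"             "1"
       Option     "ConstantDeceleration" "8"
   EndSection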

Also, I found that increasing ?WeightingDecay on the touchpad feels more comfortable. This is indicative of a small problem: the velocity estimation has no means of error correction; it basically requires the driver to deliver error-free data. A good compromise is thus to downscale in the driver only as much as needed to make the contained error insignificant, and leave the rest to dix.

As said, you have a choice as to who scales down, and the driver may well be the better choice because it knows its device. But if it isn't, letting X scale down will result in better velocity approximation, which may be advantageous.