Introduction

The predictable pointer acceleration code is an effort to remove the deficiencies of the previous acceleration code. It is intended as a drop-in replacement. Most users probably won't notice it, which is ensured by reusing the existing controls and aligning closely to their behaviour.

It provides nice features like downscaling (constant deceleration) independent of the device driver, adaptive deceleration, different acceleration profiles, and more.

However, the main difference is predictability. See Postulate for an explanation.

The behaviour of the code can be configured per device in xorg.conf, HAL .fdi files, or at runtime. A user may also choose to use the previous method or no pointer acceleration at all.

It also features an optional interface for device drivers to coordinate on acceleration, should a driver provide its own.

See here for notes regarding Server 1.6

Postulate

When the pointer gets accelerated with respect to device motion, it becomes important for the user to be able to predict the acceleration. Without that, the user is unable to use the device intuitively.

Predictability depends on knowledge. The primary information the user's brain has to build its knowledge about the applied translation from hand (device) to eye (screen) is the velocity of motion. In order not to disturb this feedback loop, the most critical part is anticipated to be producing a sophisticated estimate of the device's velocity in the user's hand.

Based on a good estimate, acceleration profiles can be easily learned by the brain and thus be predicted. Because this is an intuitive process, easing it just 'feels better'.

Intent

Focus was given to the following aspects:

  • improving heavy-load behaviour (no overshooting pointer)
  • enabling accuracy and fluid motion with a wide range of pointing devices
  • providing a better 'feel' for pointing devices
  • behaving similarly to the previous code
  • introducing no lag in the process

Basic concepts

The code serves 3 complementary functions:

  1. provide a sophisticated ballistic velocity estimate to improve the relation between velocity (of the device) and acceleration

  2. make arbitrary acceleration profiles possible

  3. decelerate by two means (constant and adaptive) if enabled

Important concepts are:

  • Scheme
    • which defines the basic algorithm. It can be summarized as 'means of translating relative device valuators in-place'. 'Predictable' refers to the new method (discussed here), 'lightweight' to the old one, and there's 'none'.
  • Profile
    • which returns an acceleration factor (not to be confused with the acceleration control) for a given velocity estimate. Users are expected to choose the one they work best with. The point of having profiles is the following: there is a single 'profile' that is the easiest for all users to learn: unaccelerated.

However this isn't too helpful; many people prefer an accelerated pointer. X has traditionally had acceleration activated by default, presumably for this reason. Since there is no single 'best' accelerated profile, this is a very individual option.

Therefore, having a variety of profiles is essential. 5 of them are currently implemented (one can't really count 'classic' or 'device-dependent' since they are not profiles themselves).

The 'classic' profile (the default) is intended to perform the old-style function selection (threshold =/!= 0) so most users won't notice a difference. There is a special 'device-dependent' profile (number 1) reserved should drivers want to specify one. See here for a short overview.

Problems of the previous method

The previous method (the 'lightweight' scheme) did not have a velocity concept. It is based directly on the motion report's values ('mickeys'), which at best correlate with velocity. Since the brain knows about velocity, not mickeys, this is disturbing.

For polynomial acceleration: On slow movements, this code infers 3 'velocities': 1, 1.41, and 2. As 1 remains unaccelerated (or is even slowed down), we're left with just two acceleration levels. It is hard to foresee what exactly will happen.

It is worse with the simple accelerated/unaccelerated function: acceleration either comes into effect or it doesn't, providing no predictability around the threshold. The higher the acceleration, the less predictability you get.

If the system is under load, a device (or the driver/kernel) may accumulate its movement delta while the system does not query the device. This invalidates the assumed mickey-velocity relationship, causing an irrationally large cursor movement afterwards. This is particularly annoying when caused by focus changes during a mouse stroke (modern toolkits luckily don't exhibit this behaviour anymore).

Some people own very responsive devices, creating a need to reduce speed on precise tasks or even in general. This often required driver-side or hardware support which was missing.

With the simple acceleration scheme, acceleration is more sensitive on diagonal movements than on axis-aligned ones.

These problems result in a reduced ability for our intuition to predict a matching hand movement for desired screen movement, thus causing more correctional moves than necessary. Put simply, the mouse 'feels bad'.

Benefits of the new code

Mostly, the polynomial acceleration becomes more usable. It can be used with higher acceleration coefficients (acc > 2) while still providing enough control. The classic acceleration should also become less jumpy, since it now ramps up softly towards accelerated motion.

Users with overly precise (or fast) devices can slow them down without losing precision, independent of driver or hardware support. Also, an acceleration function may decelerate on slow movements, giving (sub)pixel precision without sacrificing pointer speed.

The code is more robust towards different devices: one could imagine a mouse reporting very often, but only 1 dot per event. The old code would not accelerate such a device at all. While this is a theoretical case, there is robustness against jitter in device event frequency (as could be caused by system load, for example). In general, the number of problematic devices (with respect to acceleration) is expected to shrink significantly.

A driver which wants to roll its own acceleration can now do so without the risk of distributing stale data along the event chain. See 'Driver side API' below.

Users disliking all this can switch it off.

Basic usage

Type e.g.

xset m 18/10 0

to set a moderate polynomial acceleration (threshold = 0, acceleration = 1.8). The xinput tool has equivalent functionality to xset m:

xinput set-ptr-feedback <device> 0 18 10

would be the equivalent of the xset example. This works without the patch too, but the previous acceleration scheme is not very nice.

More features are available via device properties or config files.

Configuration

The defaults should suffice if you had no problems before, and they feel quite similar. The settings discussed here are the coarse and the very subtle ones, not the usual xset or GUI panel controls you may already know.

Because this is a drop-in enhancement, the usual threshold and acceleration controls work as expected and are not planned to be replaced. However, different profiles might react differently to them.

The settings described below are all device-specific. You may set them in an appropriate HAL file (typically /etc/hal/fdi/policy/10-x11-input.fdi) using

<match key="info.capabilities" contains="input.mouse">
   <merge key="input.x11_options.AdaptiveDeceleration" type="string">2</merge>
</match>

or similar. Type 'string' is required, since the server won't accept other types.

If you still use an xorg.conf, add options in the appropriate "InputDevice" section. Usually it looks like:

Section "InputDevice"
        Identifier  "Mouse0"
        Driver      "mouse"
        [...]
EndSection

For example, to enable the adaptive deceleration feature, put in a line reading

        Option      "AdaptiveDeceleration" "2"

or similar.

Some options are runtime-adjustable. The xinput tool can change them:

xinput set-prop "My Mouse" "Device Accel Profile" 2

activates profile 2 on a device named "My Mouse". xinput is quite self-explanatory; however device properties have different names than options, and not all options are available as device properties. This is available in the X.org server 1.7 and higher.

Scenarios

If your mouse moves far too fast, ConstantDeceleration is your friend. Set it to 2 or higher to divide the speed accordingly. This will not discard precision (at most on an nv-reset; see Velocity approximation, or VelocityReset below).
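For instance, at runtime (the device name is just a placeholder, as in the xinput example above):

xinput set-prop "My Mouse" "Device Accel Constant Deceleration" 2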

If your high-performance device does not respond well to acceleration, you might need to reduce velocity scaling first.

If you like the speed but need some more control at pixel level, you should set AdaptiveDeceleration to 2 or more. This allows slow movements to be decelerated down to the given factor. You might want to keep nv-resets away by setting VelocityReset to e.g. 500 ms, and maybe tweak velocity scaling to tune the results.
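In the xorg.conf style shown above, that could look like:

        Option      "AdaptiveDeceleration" "2"
        Option      "VelocityReset" "500"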

If you want only adaptive deceleration and have the default profile (or simple, for that matter), the easiest way is '> xset m 1 1'.

If you are picky about a smooth kick-in of acceleration, for example to ease doing art, I suggest using profile 2 or 3 (the latter being equivalent to xset m x/y 0), enabling adaptive deceleration and tweaking VelocityScale (in that order). Maybe playing with VelocityAbsDiff/VelocityRelDiff also yields some improvement.

If you want no acceleration, use the 'none' profile. You can set it live via xinput or via static configuration.
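For example, live via xinput (profile -1 is 'none', see the profile list below; the device name is a placeholder):

xinput set-prop "My Mouse" "Device Accel Profile" -1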

If you're not even willing to spend CPU cycles on this, set the AccelerationScheme to none.

Options

AdaptiveDeceleration [integer]

device property: Device Accel Adaptive Deceleration

Allows the acceleration profile to actually decelerate the pointer by the given factor. Adaptive deceleration is a good tool allowing precise pointing, while maintaining pointer speed in general. A good thing even if you don't need it badly.

Note however that for some profiles and/or acceleration control settings, adaptive deceleration may not come into effect. For example, the polynomial profile with an acceleration control setting of 1.0 will neither accelerate nor decelerate, so there is no deceleration that could be allowed for.

Default is 1 (no deceleration), mainly not to impose changes on unaware users.

ConstantDeceleration [integer]

device property: Device Accel Constant Deceleration

Constantly decelerates the mouse by the given factor.

Using ConstantDeceleration should be preferred over corresponding device driver options (if any), since those retain data that could worsen the prediction of device velocity. An exception is the case where the device data contains some error, which you may not want to affect the velocity estimation.

Implementation note: this factor is applied to the device velocity estimate, so the actual acceleration relates more to unaccelerated on-screen motion than to device motion.

AccelerationProfile [integer]

device property: Device Accel Profile

Select the acceleration profile by number. Default is 0, except if the driver says otherwise (none currently does).

In this section, threshold and acceleration specify the corresponding X controls (xset m acc_num/acc_den thres).

0. classic (the default)

  * similar to old behaviour, but more predictable. Selects between 'polynomial' and 'simple' based on threshold =/!= 0. 

-1. none

  * no velocity-dependent pointer acceleration or deceleration. If constant deceleration is also unused, motion processing is suppressed, saving some cycles. 

1. device-dependent

  * available if the hardware driver installs it. May be coming for synaptics. 

2. polynomial

  * Scales polynomially: velocity serves as the base, acceleration as the exponent. Very usable; the recommended profile. 

3. smooth linear

  * scales mostly linearly, but with a smooth (non-linear) start. 

4. simple

  * Transitions between accelerated and unaccelerated, but with a smooth transition range. This has the fundamental problem of accelerating on two levels, on which acceleration stays independent of velocity. Traditionally the default, however. 

5. power

  * accelerates by a power function; velocity is the exponent here. Adheres to the threshold. Easily gets hard to control, so it is important to have your velocity estimation properly tuned. 

6. linear

  * just linear to velocity and acceleration. Simple and clean. 

7. limited

  * smoothly ascends to acceleration, maxing out at threshold, where it becomes flat (is limited). 

VelocityScale [real] or

ExpectedRate [real (Hz)]

device property: Device Accel Velocity Scaling

In short, this controls sensitivity of acceleration.

It is not a direct speed control like constant deceleration. It scales the velocity estimate, which may or may not have an effect. That depends solely on the profile.

This setting is designed to be device-dependent, i.e. you set it once to match your device, then modify behaviour using the established X controls (e.g. xset m nom/den thr) or the profile.

It is important to note that there is no 'correct' factor, just one that bears the nice property of matching the X controls like the old code did. The ExpectedRate option can be used to set the velocity scale according to this criterion. Be aware that this is not a good criterion for high-rate (>500hz) devices.

You may need this setting if you find it hard to use even moderate acceleration settings/profiles (indicates scale is too high) or you don't seem to get any acceleration (indicates scale is too low).

Default is 10, which is suitable for devices reporting at approximately 100hz. The relation between the two ways to set scaling is:

VelocityScale = 1000/ExpectedRate

Currently, no attempt is made to guess the rate. Thus, high-rate (e.g. gaming) devices may be problematic to accelerate without this tweak. See below for some background info.
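For example, for a hypothetical gaming mouse reporting at 500hz, either of the following (in the xorg.conf style shown above) should give equivalent scaling, since 1000/500 = 2:

        Option      "ExpectedRate" "500"
        Option      "VelocityScale" "2"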

Be aware that constant deceleration is also multiplied into the velocity estimate, so avoid messing with both at once.

Advanced options

The following options affect how the velocity estimate is calculated and require some understanding of the algorithm.

VelocityTrackerCount [integer]

The number of mickeys tracked. Tweaking it most likely won't buy you anything, but if you do, you'd best stay between 4 and 40. Default 16.

VelocityInitialRange [integer]

The initial velocity is computed from tracker results up to this offset. 1, the default, means the initial velocity will be calculated from the first two mickeys most of the time. 0 may buy you a tiny bit of response, but increases the likelihood of jumps in acceleration. For mice reporting very often (>250 hz), larger values may make sense.

VelocityAbsDiff [real]

The allowed absolute difference between an initial velocity and the resulting velocity. Default 1.

VelocityRelDiff [real]

The allowed relative difference between an initial velocity and the resulting velocity. Default 0.2. The criterion used is:

fabs(resulting_velocity - tracker_velocity) <= ((resulting_velocity + tracker_velocity) * relative_difference)

VelocityReset [integer]

Specifies after how many milliseconds of inactivity non-visible state (i.e. subpixel position) is discarded. This affects three issues:

1) Two mouse strokes do not have any effect on each other if they are more than VelocityReset milliseconds apart. This is fairly negligible though.

2) The velocity estimate remains correct within this time if the pointer/X is stuck for a short moment, not querying the moving device.

3) Slow movements are guessed correctly if all device movement events are within this time of each other. An increase might be necessary to fully take advantage of adaptive deceleration.

Default 300 ms.

Softening [boolean]

Tweaks motion deltas from the device before applying acceleration, to smooth out fairly constant moves. The tweaking is always below device precision to make sure it doesn't get in the way. Also, when ConstantDeceleration is used, Softening is not enabled by default, because constant deceleration already provides better (i.e. real) subpixel information.

AccelerationProfileAveraging [boolean]

By default, acceleration profiles are averaged between the previous event's velocity estimate and the current one. This is meant to improve predictability. However, it has only a small impact (practically zero for linear profiles) and can be viewed as increasing response time, so you can switch it off to save some cycles if you care.
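If you want to switch it off, the option takes a boolean, e.g. in xorg.conf:

        Option      "AccelerationProfileAveraging" "false"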

AccelerationScheme [string]

Selects the scheme. All other options only apply to the predictable scheme (the default).

  • predictable
    • the scheme discussed here (default)
  • lightweight
    • the previous scheme (i.e. exactly as in X server 1.5 and before). If you prefer saving some cycles over increased usability, choose this.
  • none
    • disable acceleration/deceleration

Technical details

Acceleration profiles (for the inclined programmer)

Acceleration profiles translate device velocity into an acceleration to be imposed on the pointer. X.Org previously offered two functions: a simple accelerated/unaccelerated switch and a polynomial one. They are selected somewhat strangely through the threshold control: threshold = 0 means polynomial, simple otherwise.

The simple acceleration function is now continuous, and the polynomial one maintains f(1) = 1. They are designed to mimic the previous behaviour, so they are wrapped in the classic profile, which performs the above selection. Just copying the old functions would not provide much benefit: the patch would make the point at which acceleration kicks in more predictable, but would not stop the pointer from jumping around that point. In other words, predictability depends a lot on the profile.

If you like to play with the functions, a few nice properties are:

  1. f(1) ~ 1

    • a fixed point, to enable exchanging functions
  2. continuous

    • very nice-to-have since we would otherwise throw away our estimate (probably causing jumps)
  3. continuous over its derivative(s), i.e. Cn

    • nice to have for smoothness.
  4. f'(min_acceleration) = 0

    • Ensures a soft kick-in of acceleration
  5. f( < 1) < 1

    • enables adaptive deceleration
Further notes:

  • Although it is possible to hold all of these properties, the included functions only hold (1), (2), (5), and some also (3).
  • Acceleration profiles are not meant to enforce constant or adaptive deceleration; this is done in a separate step.
  • Notwithstanding the former, functions might adapt to min_acceleration in order to uphold (4).
  • In X, an acceleration coefficient of 1 is unaccelerated, not 0. It can be specified as a rational but is converted to float.
  • Usual control values should not make your function misbehave; see PowerProfile() for a measure one can take. If you want to do freaky new functions, you had best put them in a profile of their own. Add your function to SetAccelerationProfile(), along with init/uninit code, and you're done.
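To illustrate, here is a minimal sketch of a hypothetical profile holding (1), (2) and (5). It is not one of the shipped profiles, and it assumes a simplified signature; check ptrveloc.h for the callback signature your server version actually expects:

  /* Illustrative only: NOT one of the shipped profiles. */
  static float
  ExampleProfile(float velocity, float threshold, float acc)
  {
      float result;

      (void) threshold;            /* unused in this sketch */

      if (acc <= 1.0f)
          return 1.0f;             /* acceleration 1 means 'unaccelerated' in X */

      /* linear in velocity with fixed point f(1) = 1, so f(v < 1) < 1 */
      result = 1.0f + (velocity - 1.0f) * (acc - 1.0f);

      /* clamp so unusual control values cannot run havoc */
      if (result < 0.1f)
          result = 0.1f;
      if (result > acc)
          result = acc;
      return result;
  }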

Profiles are meant to be exchanged arbitrarily. There are some parts of the code assuming you use profiles solely for acceleration, and not to scale the device in general (or whatever else). Be nice and it will work. Probably.

While tempting, runtime-defined profiles are currently not possible. This may come with input properties.

Driver side API

In general, a driver does not need to act in any way. The acceleration is initialized during dix's InitValuatorClassDeviceStruct; user settings on acceleration are loaded when a driver calls xf86InitValuatorDefaults. Both are already called by about every driver. In general, driver-specific interaction should happen before xf86InitValuatorDefaults (xf86IVD), so user settings take precedence.

But of course it depends on what you want. This small API essentially lets the driver have its say on acceleration- and scaling-related issues. Proper use can improve the user experience, e.g. by avoiding double-accelerated synaptics pads.

The relevant header is ptrveloc.h. Most interaction requires a DeviceVelocityPtr. Use

   GetDevicePredictableAccelData(DeviceIntPtr)

to obtain it (it may return NULL). After xf86IVD, this can also be used as an indicator of whether the predictable scheme is in effect.
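For illustration, a driver might probe for the scheme like this. The surrounding function and where it gets called from are hypothetical; only GetDevicePredictableAccelData() is part of the API described here:

  #include "ptrveloc.h"

  static void
  MyDriverTuneAccel(DeviceIntPtr dev)   /* hypothetical driver hook */
  {
      DeviceVelocityPtr vel = GetDevicePredictableAccelData(dev);

      if (!vel)
          return;   /* predictable scheme not active for this device */

      /* device-specific tuning (see below) would go here,
       * e.g. adjusting vel->corr_mul or vel->const_acceleration */
  }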

Reporting rate

If a driver knows the anticipated reporting rate of the device in advance, it might choose to override the default velocity scaling to improve velocity estimates:

  your_velocityPtr->corr_mul = 1000/rate;

This is especially worth the effort when the rate differs significantly from the default 100hz.

Scaling issues

Also, if your device has very high precision, you can postpone downscaling:

  your_velocityPtr->const_acceleration *= 0.5f;

This makes the full device precision available to guess velocity, and has potentially more benefits (like not distributing stale remainders all along the event loop). Plus, you don't have to do downscaling yourself :)

Caveat: Since there is no error correction on dix side, a driver whose device has some error in the signal (like synaptics, on my laptop at least) should downscale just enough for the error to become insignificant. Any surplus scaling might still be done in dix then.

Device-specific profile

A hardware driver may use its knowledge about the device to create a special acceleration profile. This can be installed using

  SetDeviceSpecificAccelerationProfile()

In order to make it the default for the device, simply call

   SetAccelerationProfile(velocityPtr, AccelProfileDeviceSpecific);

The user may always select it using profile 1.

Leave me alone

If you ultimately want no server-side acceleration to be performed, call

InitPointerAccelerationScheme(dev, PtrAccelNoOp). 

This disables constant deceleration too.

Velocity approximation

Device velocity, the only dynamic profile input, determines the amount of pointer acceleration. Getting that right thus is crucial to actually improve the state of affairs.

The velocity discussed here is:

(device_units / milliseconds) * velocity_scale / constant_deceleration

By pre-multiplying constant deceleration, the velocity is expected to be comparable across devices. This is intended to ease fine-tuning profiles to your liking.
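As a worked example of the formula above: a device that moves 5 units in 10 ms, with the default velocity scale of 10 and a constant deceleration of 2, yields an estimate of (5 / 10) * 10 / 2 = 2.5.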

How it works

The algorithm keeps a record of (by default) 16 trackers. Each 'knows' a relative device position and when it was in effect; it's a short motion history. So when the user moves the mouse, the accumulated motion of up to 16 mickeys may be used in a distance-by-time velocity calculation.

Of course, there are many reasons not to use such a large amount. With a common 100hz device, this would add roughly 1/8th of a second of delay. For this and other reasons, a few common-sense heuristics are applied. Most importantly, a tracker must lie on a roughly linear motion segment towards the current position. Also, trackers mustn't be too old (VelocityReset) or result in a velocity too far off the initial velocity (which, by default, is made up from the two latest mickeys).
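To make the idea concrete, here is a deliberately simplified sketch with hypothetical names. It is not the actual dix code: the 'roughly linear segment' test is omitted and the exact acceptance rule may differ.

  #include <math.h>

  #define NUM_TRACKERS 16              /* VelocityTrackerCount default */

  typedef struct {
      float dx, dy;                    /* accumulated motion since tracker start */
      int   time_ms;                   /* when the tracker was started */
  } MotionTracker;

  static float
  EstimateVelocity(const MotionTracker t[NUM_TRACKERS], int now_ms,
                   float initial_velocity,   /* from the two latest mickeys */
                   float abs_diff, float rel_diff, int reset_ms)
  {
      float result = initial_velocity;
      int i;

      for (i = 0; i < NUM_TRACKERS; i++) {
          int age = now_ms - t[i].time_ms;
          float dist, vel;

          if (age <= 0 || age > reset_ms)
              continue;                /* too old: see VelocityReset */

          dist = sqrtf(t[i].dx * t[i].dx + t[i].dy * t[i].dy);
          vel  = dist / (float) age;   /* distance by time */

          /* keep only trackers whose velocity stays close to the initial
           * one (VelocityAbsDiff / VelocityRelDiff heuristics) */
          if (fabsf(vel - initial_velocity) > abs_diff &&
              fabsf(vel - initial_velocity) >
                  (vel + initial_velocity) * rel_diff)
              continue;

          result = vel;                /* last acceptable tracker wins here */
      }
      return result;
  }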

Other things to ensure predictability

Continuous profiles

Acceleration functions, called profiles, are made continuous. That avoids sudden jumps in the amount of acceleration, making the behaviour easier to predict intuitively.

Continuous profiles are complemented by averaging the profile over the interval between the previous and the current event's velocity estimate. This exploits the assumption that the real velocity also fell in this range (and changed roughly linearly). Not guaranteed, but a very likely case.

Mickeys are slightly flattened

This aims to improve evolving-speed movements such as 'painting' with a mouse typically requires. To not get in the way, it happens just below device precision, only if acceleration is actually performed (that is, the profile returned an acceleration greater than 1), and not on insignificant mickeys. It can be turned off (Softening option).

It works simply by comparing the previous and current mickey per axis, and shifting the current one by 1/2 towards the previous - given they are different and the above conditions are met.

Example (for one axis only):

1 2 3 2 3 1

would become

1 1.5 2.5 2.5 2.5 1

That's arguably smoother.
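A minimal sketch of the rule for a single axis (guard conditions such as the significance check and the 'profile returned more than 1' check are left out, which is also why the trailing 1 in the example above stays untouched):

  static float
  SoftenDelta(float previous, float current)
  {
      /* shift the current delta halfway towards the previous one,
       * but only if they actually differ */
      if (previous == current)
          return current;
      return (previous + current) / 2.0f;
  }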

Rationale for velocity scaling

Device motion deltas are divided by delta milliseconds before being filtered, so they are about n times smaller than the raw input of a device reporting every n ms. Because the reporting rate is usually unknown in advance, this is the only way to scale up to 'normal' values.

Normalized values are required to sensibly compare against the threshold control, which is an integer. This fact is the main reason for VelocityScale to exist - the acceleration control is a rational, so it would in principle bear enough precision by itself.

There is, however, one rather soft figure: A velocity estimate of 1.0 should correspond to a rather slow move, which you want to be neither accelerated nor decelerated. When untranslated, this corresponds to a move of 100 pixels per second. So for some devices, you may want to set velocity scaling or constant deceleration to keep the estimate in check.
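For instance, with the default scale of 10, a 100hz device moving one unit per report yields (1 unit / 10 ms) * 10 = 1.0 - exactly that 100-units-per-second mark.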

Final notes

Interaction with synaptics / about any smart driver

I noticed two important things to consider when using the synaptics driver (or any other driver doing substantially more than decoding mickeys):

It seems the synaptics driver implements its own acceleration, which can be switched off; two ways of accelerating the pointer certainly do no good. Switching it off can be accomplished by setting the two options 'MaxSpeed' and 'MinSpeed' to the same value.

I chose MaxSpeed = MinSpeed = 1, which seemingly made the native touchpad resolution available to X, which in turn was far too responsive. I had to apply a ConstantDeceleration of 8 to work with it. This also makes the full device precision available to guess velocity.

A good compromise is thus to downscale only so much in the driver as to make the contained error insignificant, and leave the rest to dix.

As said, you have the choice of who scales down, and the driver could well be the better choice because it knows its stuff. But if it isn't better, choosing X to scale down will result in better velocity approximation, which may be advantageous.