Board presentation
http://www.youtube.com/watch?v=tiNU_zyDUJo
The X.Org Foundation supports development, rather than controlling it.
The Foundation is now a US 501(c)(3) charity (thanks to the SFLC). Board meetings are open and public.
Two book sprints to write developer docs have happened (one in March, one in September). The results will be published on the wiki.
X.Org has just joined the OIN patent pool.
The war chest is shrinking back to a sensible size; the Foundation will soon need to fundraise. The current balance is around $85,000 and the run rate is around $30k/year, so it could burn for about 3 years before going broke, but it would prefer to find new funding sources first. The old funding sources were big UNIX workstation vendors; the Foundation expects new sources to contribute smaller sums each.
Infrastructure (shared with freedesktop.org) is a challenge, but it's being worked on; a new sysadmin has been hired and will be working on web stability.
EVoC had 5 students this year; 4 succeeded. Question - does EVoC replace Google Summer of Code, or does X.Org need to work out why it wasn't accepted and get back into GSoC in future years?
Various IP issues (new logo etc). The Foundation also needs to revise its bylaws. Note: the Foundation is not tied to X11 itself (25 years old!), and will continue with Wayland or any other graphical stack that subsumes X11.
Usual need to get more developers on board.
EVoC
http://www.youtube.com/watch?v=kOgh2EpKfSo
Endless Vacation of Code is inspired by Google Summer of Code, but funded by the X.Org Foundation. Projects are three months long, but there are no restrictions on when they run (it doesn't have to be summer).
The goal is to get students to become X.Org developers (productive output from EVoC is a bonus); it covers the entire graphics stack, basically from the drivers (OpenCL, DDX, OpenGL etc) up to the layer beneath the toolkits. It's not meant to provide a wage, just to let you work on X.Org instead of flipping burgers over a long vacation.
It puts extra load on mentors; honestly interested students are welcomed wholeheartedly and are worth the extra load, but if you're not actually interested, please don't waste their time (EVoC is not free money for students). The mentor has to establish that the student is competent to tackle the project, and provide regular assistance to keep them on track. The board relies on the mentor's judgement to evaluate students.
Graphics stack security
http://www.youtube.com/watch?v=hJpiJii44oo
Inspired by the driver-development book sprint. The presenters aren't driver developers, and had to learn the stack for this presentation.
Security is a trifecta - confidentiality, integrity, availability.
Users have expectations - e.g. cross-app communication happens via drag-and-drop or cut-and-paste, and is therefore under the user's control. X breaks this - get the auth cookie and you have full access (you can keylog, or screenshot as a credit card number is typed). Isolation is therefore between users, not between one user's apps.
Problem - all apps grow, and have bugs. Any exploitable bug in any app lets you have full access to the user's desktop session.
So, confidentiality breaches: keyloggers, screenshots at interesting moments. Integrity breaches: draw over Firefox, so that the user can't tell that you've redirected them from their bank to a phishing site; and any application can act as a "virtual keyboard" and type. Availability breach: any application can act as a screen locker. The ClearGrab bug: a virtual-keyboard app can kill a screen locker.
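As a concrete illustration of the "virtual keyboard" integrity breach: the XTEST extension is a standard X API, and any client holding a valid connection can use it to inject input, no privilege required. A minimal sketch (the keystroke and target are arbitrary):

```c
/* Any authenticated X client can synthesise keystrokes via XTEST.
 * Build with: cc fakekey.c -lX11 -lXtst */
#include <X11/Xlib.h>
#include <X11/extensions/XTest.h>
#include <X11/keysym.h>
#include <stdio.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }
    /* Type an 'a' into whichever window currently has focus. */
    KeyCode kc = XKeysymToKeycode(dpy, XK_a);
    XTestFakeKeyEvent(dpy, kc, True, 0);   /* key press */
    XTestFakeKeyEvent(dpy, kc, False, 0);  /* key release */
    XFlush(dpy);
    XCloseDisplay(dpy);
    return 0;
}
```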
Current mitigations: XSELinux has fine-grained access control, but is normally deactivated, as the default distro policy doesn't confine users. Xephyr can provide virtual X screens - but it's coarse-grained, so tricky to use right.
QubesOS uses VMs to solve security problem. PIGA-OS uses SELinux + XSELinux + PIGA-SYSTRANS daemon to solve it.
QubesOS groups apps into security domains. Each security domain is a Xen domU; the X server provided for each domain carries a colourful marking to indicate its security domain, and inter-domain comms go via a daemon in dom0, which implements mandatory access control (MAC).
PIGA maps security domains to SELinux types and labels everything. SYSTRANS grants rights as needed, prompting the user if an operation is a cross-domain move.
Wayland moves the security problem into the compositor - events are unicast, so the compositor has the control needed to secure everything. However, the compositor then becomes the attack target - how do we solve that? Privilege separation? We have an opportunity to fix things here; let's not waste it.
The driver/hardware story is not too bad - the CPU can't access arbitrary VRAM unless it's root. The open-source drivers are not too bad at GPU isolation between users - a mix of VM contexts, switching contexts on the GPU, and command validation (scanning commands to stop a user doing things they shouldn't), which trades CPU time against context-switch cost.
The goal is per-GPU-process isolation, just like the per-CPU-process isolation we have on the main CPU. Think about information leakage (uncleared buffers), privileged plugins (e.g. to the compositor) scanning the address space (ASLR helps?), etc.
Solaris privilege separated X11
http://www.youtube.com/watch?v=hphjH2KYGAw
Solaris can run X without root privileges. Aim is to upstream as much of this as possible.
Solaris Xorg creates a named pipe for GDM-to-X11 comms, and runs Xorg as root. At login, GDM tells X (via the pipe) about the new user. X switches UID, but keeps root as its saved UID (POSIX-compliant) so it can become root again when it needs to (VT switch, regeneration).
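A minimal sketch of that drop-and-regain pattern (illustrative only, not the actual Solaris Xorg code; `session_uid` is a hypothetical variable holding the UID that GDM reported over the pipe):

```c
/* Sketch: drop the effective UID to the session user while keeping
 * root as the saved UID, so privileged operations can regain it later. */
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

static uid_t session_uid;  /* hypothetical: UID reported by GDM over the pipe */

static void drop_to_user(void)
{
    /* Only the effective UID changes; the saved UID stays 0. */
    if (seteuid(session_uid) == -1)
        perror("seteuid(user)");
}

static void with_root(void (*op)(void))
{
    if (seteuid(0) == -1) {  /* permitted because the saved UID is still 0 */
        perror("seteuid(0)");
        return;
    }
    op();                    /* e.g. VT switch, server regeneration */
    drop_to_user();          /* drop privilege again immediately */
}
```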
Solaris has facilities to set device ownership/permissions on a VT and all associated devices at login; Xorg uses those facilities to ensure that it can open devices as the user, rather than becoming root.
Patches linked from slides. Side note - UEFI secure boot locks out most non-KMS drivers, so we have to work out what we do about hardware like MGA.
Dante: chasing performance
http://www.youtube.com/watch?v=PYGeXko_xf0
Oliver took the Doom 3 (8 years old) GPLv3+ release, ported it from OpenGL 1.x with extensions (depending on the backend used) to EGL and OpenGL ES 2.0 (including clean-room creation of GLSL shaders to replace the ARBfp/ARBvp shaders), then tried to make the OpenGL ES 2.0 version perform well.
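For flavour, this is the kind of minimal GLSL ES pair that replaces fixed-function/ARB-program texturing in a port like this - an illustrative sketch, not code from the actual Dante tree:

```c
#include <GLES2/gl2.h>

/* Vertex shader: replaces the fixed-function transform + texcoord pass. */
static const char *vs_src =
    "attribute vec4 a_position;\n"
    "attribute vec2 a_texcoord;\n"
    "uniform mat4 u_mvp;\n"
    "varying vec2 v_texcoord;\n"
    "void main() {\n"
    "    gl_Position = u_mvp * a_position;\n"
    "    v_texcoord = a_texcoord;\n"
    "}\n";

/* Fragment shader: replaces a trivial ARBfp texture-sample program. */
static const char *fs_src =
    "precision mediump float;\n"
    "uniform sampler2D u_tex;\n"
    "varying vec2 v_texcoord;\n"
    "void main() {\n"
    "    gl_FragColor = texture2D(u_tex, v_texcoord);\n"
    "}\n";

/* Compile one shader stage; returns 0 on failure. */
static GLuint compile(GLenum type, const char *src)
{
    GLuint s = glCreateShader(type);
    glShaderSource(s, 1, &src, NULL);
    glCompileShader(s);
    GLint ok = GL_FALSE;
    glGetShaderiv(s, GL_COMPILE_STATUS, &ok);
    return ok ? s : 0;
}
```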
There are no good performance analysis tools for Mesa. The "best" available is intel_gpu_top for i965-driver hardware, but it's a top-like, coarse-grained performance tool. It's also not user-friendly, as it represents load in terms of hardware units (so you need to read the HW docs to have a clue what's going on when it shows an issue).
Every closed-driver vendor has decent tools - AMD bought gDEBugger and made it fglrx-only, and the other vendors have equivalent tools. An older version of gDEBugger sort-of works with Mesa, sometimes. Mesa has nothing.
Linux has the perf infrastructure - lots of performance counters, plus sampling with source-code annotation of the results. It's a nice profiler; can we reuse it for GPU performance? Hook DRM and intel_gpu_top data in, and rely on the existing tools for the UI. Userspace co-operation is needed to get all the way there (a per-frame marker that doesn't stall the GPU, so the profile can be interpreted frame by frame).
Mesa's best infrastructure so far is perf_debug - but it's not tied to ARB_debug_output, just to stdout. It's also not frame-boundary aware, so there's no way to tie perf_debug output to render operations.
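For reference, this is what consuming driver messages through GL_ARB_debug_output looks like on the application side - the extension and callback API are real; routing Mesa's perf_debug output through it is the wish, not current behaviour:

```c
#include <GL/glx.h>    /* glXGetProcAddress */
#include <GL/glext.h>  /* PFNGLDEBUGMESSAGECALLBACKARBPROC, GLDEBUGPROCARB */
#include <stdio.h>

static void on_debug(GLenum source, GLenum type, GLuint id, GLenum severity,
                     GLsizei length, const GLchar *message,
                     const void *userParam)
{
    (void)source; (void)type; (void)id; (void)severity;
    (void)length; (void)userParam;
    /* A tool could tag each message with its own frame counter here,
     * recovering the per-frame association the talk asks for. */
    fprintf(stderr, "GL debug/perf: %s\n", message);
}

/* Call once after creating a context (with the debug flag set). */
static void install_debug_callback(void)
{
    PFNGLDEBUGMESSAGECALLBACKARBPROC cb =
        (PFNGLDEBUGMESSAGECALLBACKARBPROC)
            glXGetProcAddress((const GLubyte *)"glDebugMessageCallbackARB");
    if (cb)
        cb((GLDEBUGPROCARB)on_debug, NULL);
}
```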
Given all these hints, how do we cope with separate debugger processes?
Oliver found a GLX versus EGL bug. Others may exist.
Intel avoided the need for tools when fixing Valve's L4D2 - they sent skilled engineers in instead. That does not scale, so tools are needed.
Comment at the end from the audience - apitrace is being developed into a profiling tool (although it only runs on captured traces, not on live apps); work is underway on a shimGL that hooks live apps.
Phoronix benchmarking
No video. Contact MichaelLarabel for more information.
Michael would like developers to expose performance information (clocks, utilization percentage, thermals) in a consistent fashion (e.g. standard sysfs files, with speeds in kHz). Please all behave the same, though.
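A sketch of what that consistent interface might look like - the paths and units below are hypothetical, invented for illustration, not an existing standard:

```c
/* Hypothetical standardised files (illustrative, not an existing ABI):
 *   /sys/class/drm/card0/gpu_cur_freq_khz   - current clock, in kHz
 *   /sys/class/drm/card0/gpu_busy_percent   - utilisation, 0-100
 *   /sys/class/drm/card0/gpu_temp_millideg  - temperature, millidegrees C
 * A benchmark harness could then read them identically on every driver. */
#include <stdio.h>

static long read_sysfs_long(const char *path)
{
    long v = -1;
    FILE *f = fopen(path, "r");
    if (f) {
        if (fscanf(f, "%ld", &v) != 1)
            v = -1;
        fclose(f);
    }
    return v;
}

int main(void)
{
    printf("clock: %ld kHz\n",
           read_sysfs_long("/sys/class/drm/card0/gpu_cur_freq_khz"));
    printf("busy:  %ld%%\n",
           read_sysfs_long("/sys/class/drm/card0/gpu_busy_percent"));
    return 0;
}
```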
The remaining discussion was Michael asking how Phoronix can be useful to driver devs, given that its benchmarks are end-user targeted.
There was some discussion of the goals of benchmarks, with agreement from the floor that devs can write their own benchmarks to target individual bits of a driver. The big thing Michael can do is benchmark git snapshots regularly, and bisect performance regressions.
DRI3
http://www.youtube.com/watch?v=ZLnl6s60YL8
DRI2 fixed the SAREA disaster of DRI1. Experience (including implementing Wayland) has shown that DRI2 has pain points of its own, so let's discuss a DRI3 that fixes them.
The underlying issue is the X server allocating buffers, and thus having to synchronise tightly with clients. DRI3 aims to make GLX buffer management similar to Wayland's (clients allocate, and the "present frame" half of glSwapBuffers becomes sending a buffer to X). Can we avoid having the server tell the client how big the buffer is (e.g. by relying on glViewport, and providing an extension for apps where the bounding box of all viewports is not the buffer size)?
DMABUF may let us remove GEM handles from the pile - pass a private DMABUF fd over the UNIX socket, instead of a guessable global GEM name. There's a lot to consider here - YUV buffers, buffer sharing, etc.
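The underlying mechanism is ordinary UNIX fd-passing with SCM_RIGHTS; a generic sketch (not DRI3 wire protocol) showing why the dma-buf fd stays private to the two endpoints, unlike a global GEM name:

```c
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Send one file descriptor (e.g. a dma-buf fd) over a UNIX-domain socket.
 * Only the receiving process gains access; nothing is globally guessable. */
static int send_fd(int sock, int fd)
{
    char byte = 'F';  /* at least one byte of real data must accompany it */
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
    char ctrl[CMSG_SPACE(sizeof(int))];
    struct msghdr msg = { 0 };
    struct cmsghdr *cmsg;

    memset(ctrl, 0, sizeof(ctrl));
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl;
    msg.msg_controllen = sizeof(ctrl);

    cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}
```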
DRI3 can now flip when the window size matches the buffer size - blits are only needed on a size mismatch (which means client and server disagree on the window size, so something will go wrong anyway).
Identifying reusable buffers is also a challenge.
Discussion will be on mailing list - this talk is to get people thinking about it.
Board meeting
http://www.youtube.com/watch?v=J9QpJEwfLM0
A new logo is needed. The board will fund a contest if someone steps up to run it, provided the board gets the trademark and copyright on the resulting logo.
Election cycle upcoming - four seats to replace, as per the bylaws.
Finances are now about the level the board wants to run with - time to raise money to stay there.
Foundation no longer paying for hosting at MIT - now hosted at PSU, sharing infrastructure with freedesktop.org. Machines donated by Sun Microsystems.
The new sysadmin is starting this week, and will begin by working on web service reliability.