Java Sound on Linux

I often find my favorite Java program (AltosUI) unable to make any sound. Here’s a history of the various adventures I’ve had.

Java and PulseAudio ALSA support

When we started playing with Java a few years ago, we discovered that if PulseAudio were enabled, Java wouldn’t make any sound. Presumably, that was because the ALSA emulation layer offered by PulseAudio wasn’t capable of supporting Java.

The fix for that was to make sure pulseaudio would never run. That’s harder than it seems; pulseaudio is like the living dead, rising from the grave every time you kill it. As it’s nearly impossible to install any desktop applications without gaining a bogus dependency on pulseaudio, the solution that works best is to use dpkg-divert to make sure dpkg never manages to actually install the program:

# dpkg-divert --rename /usr/bin/pulseaudio

With this in place, Java was a happy camper for a long time.

Java and PulseAudio Native support

More recently, Java has apparently gained some native PulseAudio support. Of course, I couldn’t actually get it to work, even after running the PulseAudio daemon. Nevertheless, some kind Debian developer decided that sound should be broken by default for all Java applications and selected the PulseAudio back-end in the Java audio configuration file.

Fixing that involved learning about said Java audio configuration file and then applying a patch to revert the Debian packaging damage.

$ cat /usr/lib/jvm/java-7-openjdk-amd64/jre/lib/sound.properties
...
#javax.sound.sampled.Clip=org.classpath.icedtea.pulseaudio.PulseAudioMixerProvider
#javax.sound.sampled.Port=org.classpath.icedtea.pulseaudio.PulseAudioMixerProvider
#javax.sound.sampled.SourceDataLine=org.classpath.icedtea.pulseaudio.PulseAudioMixerProvider
#javax.sound.sampled.TargetDataLine=org.classpath.icedtea.pulseaudio.PulseAudioMixerProvider

javax.sound.sampled.Clip=com.sun.media.sound.DirectAudioDeviceProvider
javax.sound.sampled.Port=com.sun.media.sound.PortMixerProvider
javax.sound.sampled.SourceDataLine=com.sun.media.sound.DirectAudioDeviceProvider
javax.sound.sampled.TargetDataLine=com.sun.media.sound.DirectAudioDeviceProvider

You can see the PulseAudio mistakes at the top of that listing, with the corrected native interface settings at the bottom.

Java and single-open ALSA drivers

It used to be that ALSA drivers could support multiple applications having the device open at the same time. Those with hardware mixing would use that to merge the streams together; those without hardware mixing might do that in the kernel itself. While the latter is probably not a great plan, it did make ALSA a lot more friendly to users.

My new laptop is not friendly, and returns EBUSY when you try to open the PCM device more than once.

After downloading the jdk and alsa library sources, I figured out that Java was trying to open the PCM device multiple times when using the standard Java sound API in the simplest possible way. I thought I was going to have to fix Java, until I discovered that ALSA provides user-space mixing with the ‘dmix’ plugin. I enabled that on my machine, and all was well.

$ cat /etc/asound.conf
pcm.!default {
    type plug
    slave.pcm "dmixer"
}

pcm.dmixer  {
    type dmix
    ipc_key 1024
    slave {
        pcm "hw:1,0"
        period_time 0
        period_size 1024
        buffer_size 4096
        rate 44100
    }
    bindings {
        0 0
        1 1
    }
}

ctl.dmixer {
    type hw
    card 1
}

ctl.!default {
    type hw
    card 1
}

As you can see, my sound card is not number 0 but number 1; if your card is a different number, you’ll have to adapt as necessary.
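
To verify that dmix is actually in effect, a quick test like this one can help (my sketch using the ALSA library API, not part of the original setup); it opens the default device twice, which is exactly what fails with EBUSY on a single-open driver without dmix:

#include <stdio.h>
#include <alsa/asoundlib.h>

int
main(void)
{
    snd_pcm_t *first, *second;

    /* Both opens should succeed through dmix; opening bare "hw:1,0"
     * would return EBUSY on the second attempt. */
    int err1 = snd_pcm_open(&first, "default", SND_PCM_STREAM_PLAYBACK, 0);
    int err2 = snd_pcm_open(&second, "default", SND_PCM_STREAM_PLAYBACK, 0);

    printf("first open: %s\nsecond open: %s\n",
           snd_strerror(err1), snd_strerror(err2));
    return 0;
}

Build it with ‘gcc -o pcm-test pcm-test.c -lasound’; two successful opens mean dmix is doing its job.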

Posted Sat Apr 5 22:30:55 2014

Core Rendering with Glamor

I’ve hacked up the intel driver to bypass all of the UXA paths when Glamor is enabled, so I’m currently running an X server that uses only Glamor for all rendering. There are still too many fallbacks, and performance for some operations is not what I’d like, but it’s entirely usable. It supports DRI3, so I even have GL applications running.

Core Rendering Status

I’ve continued to focus on getting the core X protocol rendering operations complete and correct; those remain a critical part of many X applications and are a poor match for GL. At this point, I’ve got accelerated versions of the basic spans functions, filled rectangles, text and copies.

GL and Scrolling

OpenGL has been on a many-year vendetta against one of the most common 2D accelerated operations — copying data within the same object, even when that operation overlaps itself. This used to be the most performance-critical operation in X; it was used for scrolling your terminal windows and when moving windows around on the screen.

Reviewing the OpenGL 3.x spec, Eric and I both read the glCopyPixels specification as clearly requiring correct semantics for overlapping copy operations — it says that the operation must be equivalent to reading the pixels and then writing the pixels. My CopyArea acceleration thus uses this path for the self-copy case. However, the ARB decided that having a well defined blt operation was too nice to the users, so the current 4.4 specification adds explicit language to assert that this is not well defined anymore (even in the face of the existing language, which is pretty darn unambiguous).
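
For reference, the self-copy path in question is conceptually nothing more than the following sketch (compatibility-profile GL; the actual CopyArea code in glamor is more involved, so treat this as an illustration of the semantics rather than the implementation):

/* Overlapping self-copy, relying on the GL 3.x read-then-write
 * wording for glCopyPixels. */
glWindowPos2i(dst_x, dst_y);
glCopyPixels(src_x, src_y, width, height, GL_COLOR);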

I suspect we’ll end up creating an extension that offers what we need here; applications are unlikely to stop scrolling stuff around, and GPUs (at least Intel) will continue to do what we want. This is the kind of thing that makes GL maddening for 2D graphics — the GPU does what we want, and the GL doesn’t let us get at it.

For implementations not capable of supporting the required semantic, someone will presumably need to write code that creates a temporary copy of the data.

PBOs for fallbacks

For operations which Glamor can’t manage, we need to fall back to using a software solution. Direct-to-hardware acceleration architectures do this by simply mapping the underlying GPU object to the CPU. GL doesn’t provide this access, and that’s probably a good thing, as such access needs to be carefully synchronized with GPU access, and attempting to access tiled GPU objects with the CPU requires either piles of CPU code to ‘de-tile’ accesses (à la wfb) or special hardware detilers (like the Intel GTT).

However, GL does provide a fairly nice abstraction called pixel buffer objects (PBOs) which work to speed access to GPU data from the CPU.

The fallback code allocates a PBO for each relevant X drawable, asks GL to copy pixels in, and then calls fb, with the drawable now referencing the temporary buffer. On the way back out, any potentially modified pixels are copied back through GL and the PBOs are freed.

This turns out to be dramatically faster than malloc’ing temporary buffers as it allows the GL to allocate memory that it likes, and for it to manage the data upload and buffer destruction asynchronously.
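
Mechanically, the round trip looks something like this (a rough sketch of the GL calls involved, assuming a 32bpp pixmap; the actual glamor code is structured differently):

GLuint pbo;
void *cpu;

/* GPU -> CPU: let GL copy the interesting region into a PBO */
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, NULL, GL_STREAM_READ);
glReadPixels(x, y, width, height, GL_BGRA, GL_UNSIGNED_BYTE, NULL);

/* Map the PBO so the software code can touch the pixels */
cpu = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_WRITE);
/* ... call fb with the drawable pointing at 'cpu' ... */
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);

/* CPU -> GPU: upload any modified pixels back into the texture */
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, width, height,
                GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glDeleteBuffers(1, &pbo);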

Because X pixmaps can contain many X windows (the root pixmap being the most obvious example), they are often significantly larger than the actual rendering target area. As an optimization, the code only copies data from the relevant area of the pixmap, saving considerable time as a result. There’s even an interface that further restricts the copy to a subset of the target drawable, which the Composite function uses.
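
From the caller’s side, the fallback pattern then looks roughly like this (glamor_prepare_access_box and glamor_finish_access describe the shape of the interface; treat the exact names and signatures as illustrative):

/* Shuttle just the target box through the PBO path, run fb,
 * then push the pixels back out. */
if (glamor_prepare_access_box(drawable, GLAMOR_ACCESS_RW,
                              x, y, width, height)) {
    fbPolyFillRect(drawable, gc, nrect, prect);
    glamor_finish_access(drawable);
}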

Using Scissoring for Clipping

The GL scissor operation provides a single clipping rectangle. X provides a list of rectangles to clip to. There are two obvious ways to perform clipping here — either perform all clipping in software, or hand each X clipping rectangle in turn to GL and re-execute the entire rendering operation for each rectangle.

You’d think that the former plan would be the obvious choice; clearly re-executing the entire rendering operation potentially many times is going to take a lot of time in the GPU.

However, the reality is that most X drawing occurs under a single clipping rectangle. Accelerating this common case by using the hardware clipper provides enough benefit that we definitely want to use it when it works. We could duplicate all of the rendering paths and perform CPU-based clipping when the number of rectangles was above some threshold, but the additional code complexity isn’t obviously worth the effort, given how infrequently it will be used. So I haven’t bothered. Most operations look like this:

Allocate VBO space for data
Fill VBO with X primitives
loop over clip rects {
    glScissor()
    glDrawArrays()
}

This obviously outsources as much of the problem as possible to the GL library, reducing the CPU time spent in glamor to a minimum.

A Peek at Some Code

With all of these changes in place, drawing something like a list of rectangles becomes a fairly simple piece of code:

First, make sure the program we want to use is available and can be used with our GC configuration:

prog = glamor_use_program_fill(pixmap, gc,
                               &glamor_priv->poly_fill_rect_program,
                               &glamor_facet_polyfillrect);

if (!prog)
    goto bail_ctx;

Next, allocate the VBO space and copy all of the X data into it. Note that the data transfer is simply ‘memcpy’ here — that’s because we break the X objects apart in the vertex shader using instancing, avoiding the CPU overhead of computing four corner coordinates.

/* Set up the vertex buffers for the points */

v = glamor_get_vbo_space(drawable->pScreen, nrect * (4 * sizeof (GLshort)), &vbo_offset);

glEnableVertexAttribArray(GLAMOR_VERTEX_POS);
glVertexAttribDivisor(GLAMOR_VERTEX_POS, 1);
glVertexAttribPointer(GLAMOR_VERTEX_POS, 4, GL_SHORT, GL_FALSE,
                      4 * sizeof (GLshort), vbo_offset);

memcpy(v, prect, nrect * sizeof (xRectangle));

glamor_put_vbo_space(screen);

Finally, loop over the pixmap tile fragments, and then over the clip list, selecting the drawing target and painting the rectangles:

glEnable(GL_SCISSOR_TEST);

glamor_pixmap_loop(pixmap_priv, box_x, box_y) {
    int nbox = RegionNumRects(gc->pCompositeClip);
    BoxPtr box = RegionRects(gc->pCompositeClip);

    glamor_set_destination_drawable(drawable, box_x, box_y, TRUE, FALSE, prog->matrix_uniform, &off_x, &off_y);

    while (nbox--) {
        glScissor(box->x1 + off_x,
                  box->y1 + off_y,
                  box->x2 - box->x1,
                  box->y2 - box->y1);
        box++;
        glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, nrect);
    }
}

GL texture size limits

X pixmaps use 16-bit dimensions for width and height, allowing them to be up to 65535 x 65535 pixels. Because the X coordinate space is signed, only a quarter of this space is actually useful, which makes the useful size of X pixmaps only 32767 x 32767. This is still larger than most GL implementations offer as a maximum texture size though, and while it would be nice to just say ‘we don’t allow pixmaps larger than GL textures’, the reality is that many applications expect to be able to allocate such pixmaps today, largely to hold the ever-increasing size of digital photographs.

Glamor has always supported large X pixmaps; it does this by splitting them up into tiles, each of which is no larger than the largest texture supported by the driver. What I’ve added to Glamor is some simple macros that walk over the array of tiles, making it easy for the rendering code to support large pixmaps without needing any special case code.
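
One of these is the glamor_pixmap_loop used in the rectangle code above; a definition along these lines would do the walking (the field names here are my guesses, not the actual glamor_priv.h definition):

/* Visit each tile (box_x, box_y) of a possibly-tiled pixmap */
#define glamor_pixmap_loop(priv, box_x, box_y)                          \
    for ((box_y) = 0; (box_y) < (priv)->box_array_height; (box_y)++)    \
        for ((box_x) = 0; (box_x) < (priv)->box_array_width; (box_x)++)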

Glamor also has some simple testing support — you can compile the code to ignore the system-provided maximum texture size and supply your own value. This code had gone stale and couldn’t work, as there are parts of the code for which tiling support just doesn’t make sense, like the glyph cache or the X scanout buffer. I fixed things so that you can leave those special cases as solitary large tiles while breaking up all other pixmaps into tiles no larger than 32 pixels square.

I hope to remove the single-tile case and leave the code supporting only the multiple-tile case; we have to have the latter code anyway, and keeping the single-tile code around simply increases our code size for no obvious benefit.

Getting accelerated copies between tiled pixmaps added a new coordinate system to the mix and took a few hours of fussing until it was working.

Rebasing Many (many) Times

I’m sure most of us remember the days before git; changes were often monolithic, and the notion of restructuring a series of changes for the sake of clarity never occurred to anyone. It used to be that the final code was the only interesting artifact; how you got there didn’t matter to anyone. Things are different today; I probably spend a third of my development time communicating with other developers about how the code should change, by reworking the sequence of patches that are to be applied.

In the case of Glamor, I’ve now got a set of 28 patches. The first few are fixes outside of the glamor tree that make the rest of the server work better. Then there are a few general glamor infrastructure additions. After that, each core operation is replaced, one at a time. Finally, a bit of stale code is removed. By sequencing things in a logical fashion, I hope to make review of the code easier, which should mean that people will spend less time trying to figure out what I did and more time figuring out if what I did is reasonable and correct.

Supporting Older Versions of GL

All of the new code uses vertex instancing to move coordinate computation from the CPU to the GPU. I’m also pulling textures apart using integer operations. Right now, we should correctly fall back to software for older hardware, but it would probably be nicer to just fall back to simpler GL instead. Unless everyone decides to go buy hardware with new enough GL driver support, someone is going to need to write simplified code paths for glamor.

If you’ve got such hardware, and are interested in making it work well, please take this as an opportunity to help yourself and others.

Near-term Glamor Goals

I’m pretty sure we’ll have the code in good enough shape to merge before the window closes for X server 1.16. Eric is in charge of the glamor tree, so it’s up to him when stuff is pulled in. He and Markus Wick have also been generating code and reviewing stuff, but we could always use additional testing and review to make the code as good as possible before the merge window closes.

Markus has expressed an interest in working on Glamor as a part of the X.org summer of code this year; there’s clearly plenty of work to do here. Eric and I haven’t touched the render acceleration stuff at all, and that code could definitely use some updating to use more modern GL features.

If that works as well as the core rendering code changes, then we can look forward to a Glamor which offers GPU-limited performance for classic X applications, without requiring GPU-specific drivers for every generation of every chip.

Posted Sat Mar 22 01:20:34 2014

Brief Glamor Hacks

Eric Anholt started writing Glamor a few years ago. The goal was to provide credible 2D acceleration based solely on the OpenGL API, in particular, to implement the X drawing primitives, both core and Render extension, without any GPU-specific code. When he started, the thinking was that fixed-function devices were still relevant, so that original code didn’t insist upon “modern” OpenGL features like GLSL shaders. That made the code less efficient and harder to write.

Glamor used to be a side project within the X world, seen as something that really wasn’t very useful, something that any credible 2D driver would replace with custom, highly optimized, GPU-specific code. Eric and I both hoped that Glamor would turn into something credible and that we’d be able to eliminate all of the horror-show GPU-specific code in every driver for drawing X text, rectangles and composited images. That hadn’t happened though, until now.

Fast forward to the last six months. Eric has spent a bunch of time cleaning up Glamor internals, and in fact he’s had it merged into the core X server for version 1.16 which will be coming up this July. Within the Glamor code base, he’s been cleaning some internal structures up and making life more tolerable for Glamor developers.

Using libepoxy

A big part of the cleanup was transitioning all of the extension function calls to use his other new project, libepoxy, which provides a sane, consistent and performant API to OpenGL extensions for Linux, Mac OS and Windows. That library is awesome, and you should use it for everything you do with OpenGL because not using it is like pounding nails into your head. Or eating non-tasty pie.

Using VBOs in Glamor

One thing he recently cleaned up was how to deal with VBOs during X operations. VBOs are absolutely essential to modern OpenGL applications; they’re really the only way to efficiently pass vertex data from the application to the GPU. And every other mechanism is deprecated by the ARB as not part of the blessed ‘core context’.

Glamor provides a simple way of getting some VBO space, dumping data into it, and then using it through two wrapping calls which you use along with glVertexAttribPointer as follows:

pointer = glamor_get_vbo_space(screen, size, &offset);
glVertexAttribPointer(attribute_location, count, type,
              GL_FALSE, stride, offset);
memcpy(pointer, data, size);
glamor_put_vbo_space(screen);

glamor_get_vbo_space allocates the specified amount of VBO space and returns a pointer to that along with an ‘offset’, which is suitable to pass to glVertexAttribPointer. You dump your data into the returned pointer, call glamor_put_vbo_space and you’re all done.

Actually Drawing Stuff

At the same time, Eric has been optimizing some of the existing rendering code. But, all of it is still frankly terrible. Our dream of credible 2D graphics through OpenGL just wasn’t being realized at all.

On Monday, I decided that I should go play in Glamor for a few days, both to hack up some simple rendering paths and to familiarize myself with the insides of Glamor as I’m getting asked to review piles of patches for it, and not understanding a code base is a great way to help introduce serious bugs during review.

I started with the core text operations. Not because they’re particularly relevant these days, as most applications draw text with the Render extension to provide better looking results, but instead because they’re often one of the hardest things to do efficiently with a heavyweight GPU interface, and OpenGL can be amazingly heavyweight if you let it.

Eric spent a bunch of time optimizing the existing text code to try and make it faster, but at the bottom, it actually draws each lit pixel as a tiny GL_POINT object by sending a separate x/y vertex value to the GPU (using the above VBO interface). This code walks the array of bits in the font, checking each one to see if it is lit, then checking whether the lit pixel is within the clip region, and only then adding the coordinates of the lit pixel to the VBO. The amazing thing is that even with all of this CPU and GPU work, the venerable 6x13 font is drawn at an astonishing 3.2 million glyphs per second. Of course, pure software draws text at 9.3 million glyphs per second.

I suspected that a more efficient implementation might be able to draw text a bit faster, so I decided to just start from scratch with a new GL-based core X text drawing function. The plan was pretty simple:

  1. Dump all glyphs in the font into a texture. Store them in 1bpp format to minimize memory consumption.

  2. Place raw (integer) glyph coordinates into the VBO. Place four coordinates for each and draw a GL_QUAD for each glyph.

  3. Transform the glyph coordinates into the usual GL range (-1..1) in the vertex shader (the arithmetic is sketched just after this list).

  4. Fetch a suitable byte from the glyph texture, extract a single bit and then either draw a solid color or discard the fragment.
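
Step 3 is just a scale and an offset; expressed in C rather than shader code (my sketch of the idea, not the actual shader source, which presumably folds this into a transform):

/* Map X pixel coordinates (origin top-left, y growing downward)
 * into the GL -1..1 clip space. */
gl_x =  2.0f * x / width  - 1.0f;
gl_y = -2.0f * y / height + 1.0f;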

This makes the X server code surprisingly simple; it computes integer coordinates for the glyph destination and glyph image source and writes those to the VBO. When all of the glyphs are in the VBO, it just calls glDrawArrays(GL_QUADS, 0, 4 * count). The results were “encouraging”:

1: fb-text.perf
2: glamor-text.perf
3: keith-text.perf

       1                 2                           3                 Operation
------------   -------------------------   -------------------------   -------------------------
   9300000.0      3160000.0 (     0.340)     18000000.0 (     1.935)   Char in 80-char line (6x13) 
   8700000.0      2850000.0 (     0.328)     16500000.0 (     1.897)   Char in 70-char line (8x13) 
   6560000.0      2380000.0 (     0.363)     11900000.0 (     1.814)   Char in 60-char line (9x15) 
   2150000.0       700000.0 (     0.326)      7710000.0 (     3.586)   Char16 in 40-char line (k14) 
    894000.0       283000.0 (     0.317)      4500000.0 (     5.034)   Char16 in 23-char line (k24) 
   9170000.0      4400000.0 (     0.480)     17300000.0 (     1.887)   Char in 80-char line (TR 10) 
   3080000.0      1090000.0 (     0.354)      7810000.0 (     2.536)   Char in 30-char line (TR 24) 
   6690000.0      2640000.0 (     0.395)      5180000.0 (     0.774)   Char in 20/40/20 line (6x13, TR 10) 
   1160000.0       351000.0 (     0.303)      2080000.0 (     1.793)   Char16 in 7/14/7 line (k14, k24) 
   8310000.0      2880000.0 (     0.347)     15600000.0 (     1.877)   Char in 80-char image line (6x13) 
   7510000.0      2550000.0 (     0.340)     12900000.0 (     1.718)   Char in 70-char image line (8x13) 
   5650000.0      2090000.0 (     0.370)     11400000.0 (     2.018)   Char in 60-char image line (9x15) 
   2000000.0       670000.0 (     0.335)      7780000.0 (     3.890)   Char16 in 40-char image line (k14) 
    823000.0       270000.0 (     0.328)      4470000.0 (     5.431)   Char16 in 23-char image line (k24) 
   8500000.0      3710000.0 (     0.436)      8250000.0 (     0.971)   Char in 80-char image line (TR 10) 
   2620000.0       983000.0 (     0.375)      3650000.0 (     1.393)   Char in 30-char image line (TR 24)

This is our old friend x11perfcomp, but slightly adjusted for a modern reality where you really do end up drawing billions of objects (hence the wider columns). This table lists the performance for drawing a range of different fonts in both poly text and image text variants. The first column is for Xephyr using software (fb) rendering, the second is for the existing Glamor GL_POINT based code and the third is the latest GL_QUAD based code.

As you can see, drawing points for every lit pixel in a glyph is surprisingly fast, but only about 1/3 the speed of software for essentially any size glyph. By minimizing the use of the CPU and pushing piles of work into the GPU, we manage to increase the speed of most of the operations, with larger glyphs improving significantly more than smaller glyphs.

Now, you ask how much code this involved. And, I can truthfully say that it was a very small amount to write:

 Makefile.am        |    2 
 glamor.c           |    5 
 glamor_core.c      |    8 
 glamor_font.c      |  181 ++++++++++++++++++++
 glamor_font.h      |   50 +++++
 glamor_priv.h      |   26 ++
 glamor_text.c      |  472 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 glamor_transform.c |    2 
 8 files changed, 741 insertions(+), 5 deletions(-)

Let’s Start At The Very Beginning

The results of optimizing text encouraged me to start at the top of x11perf and see what progress I could make. In particular, looking at the current Glamor code, I noticed that it did all of the vertex transformation with the CPU. That makes no sense at all for any GPU built in the last decade; they’ve got massive numbers of transistors dedicated to performing precisely this kind of operation. So, I decided to see what I could do with PolyPoint.

PolyPoint is absolutely brutal on any GPU; you have to pass it two coordinates for each pixel, and so the very best you can do is send it 32 bits, or precisely the same amount of data needed to actually draw a pixel on the frame buffer. With this in mind, one expects that about the best you can do compared with software is tie. Of course, the CPU version is actually computing an address and clipping, but those are all easily buried in the cost of actually storing a pixel.
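
Concretely, the path streams the X points straight into a VBO and issues a single draw call for all of them. A sketch using the VBO helpers described above (abbreviated; not the exact glamor_polyops.c code):

/* Two GLshort coordinates per point, one GL_POINTS draw for all */
v = glamor_get_vbo_space(screen, npt * 2 * sizeof (GLshort), &vbo_offset);

glEnableVertexAttribArray(GLAMOR_VERTEX_POS);
glVertexAttribPointer(GLAMOR_VERTEX_POS, 2, GL_SHORT, GL_FALSE,
                      2 * sizeof (GLshort), vbo_offset);

for (i = 0; i < npt; i++) {
    *v++ = ppt[i].x;
    *v++ = ppt[i].y;
}

glamor_put_vbo_space(screen);

glDrawArrays(GL_POINTS, 0, npt);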

In any case, the results of this little exercise are pretty close to a tie — the CPU draws 190,000,000 dots per second and the GPU draws 189,000,000 dots per second. Looking at the vertex and fragment shaders generated by the compiler, it’s clear that there’s room for improvement.

The fragment shader is simply pulling the constant pixel color from a uniform and assigning it to the fragment color in this the simplest of all possible shaders:

uniform vec4 color;
void main()
{
       gl_FragColor = color;
}

This generates five instructions:

Native code for point fragment shader 7 (SIMD8 dispatch):
   START B0
   FB write target 0
0x00000000: mov(8)          g113<1>F        g2<0,1,0>F                      { align1 WE_normal 1Q };
0x00000010: mov(8)          g114<1>F        g2.1<0,1,0>F                    { align1 WE_normal 1Q };
0x00000020: mov(8)          g115<1>F        g2.2<0,1,0>F                    { align1 WE_normal 1Q };
0x00000030: mov(8)          g116<1>F        g2.3<0,1,0>F                    { align1 WE_normal 1Q };
0x00000040: sendc(8)        null            g113<8,8,1>F
                render ( RT write, 0, 4, 12) mlen 4 rlen 0      { align1 WE_normal 1Q EOT };
   END B0

As this pattern is actually pretty common, it turns out there’s a single instruction that can replace all four of the moves. That should actually make a significant difference in the run time of this shader, and this shader runs once for every single pixel.

The vertex shader has some similar optimization opportunities, but it only runs once for every 8 pixels — with the SIMD format flipped around, the vertex shader can compute 8 vertices in parallel, so it ends up executing one-eighth as often. It’s got some redundant moves, which could be optimized by improving the copy propagation analysis code in the compiler.

Of course, improving the compiler to make these cases run faster will probably make a lot of other applications run faster too, so it’s probably worth doing at some point.

Again, the amount of code necessary to add this path was tiny:

 Makefile.am        |    1 
 glamor.c           |    2 
 glamor_polyops.c   |  116 ++++++++++++++++++++++++++++++++++++++++++++++++++--
 glamor_priv.h      |    8 +++
 glamor_transform.c |  118 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 glamor_transform.h |   51 ++++++++++++++++++++++
 6 files changed, 292 insertions(+), 4 deletions(-)

Discussion of Results

These two cases, text and points, are probably the hardest operations to accelerate with a GPU and yet a small amount of OpenGL code was able to meet or beat software easily. The advantage of this work over traditional GPU 2D acceleration implementations should be pretty clear — this same code should work well on any GPU which offers a reasonable OpenGL implementation. That means everyone shares the benefits of this code, and everyone can contribute to making 2D operations faster.

All of these measurements were actually done using Xephyr, which offers a testing environment unlike any I’ve ever had — build and test hardware acceleration code within a nested X server, debugging it in a windowed environment on a single machine. Here’s how I’m running it:

$ ./Xephyr  -glamor :1 -schedMax 2000 -screen 1024x768 -retro

The one bit of magic here is the ‘-schedMax 2000’ flag, which causes Xephyr to update the screen less often when applications are very busy and serves to reduce the overhead of screen updates while running x11perf.

Future Work

Having managed to accelerate 17 of the 392 operations in x11perf, it’s pretty clear that I could spend a bunch of time just stepping through each of the remaining ones and working on them. Before doing that, we want to try and work out some general principles about how to handle core X fill styles. Moving all of the stipple and tile computation to the GPU will help reduce the amount of code necessary to fill rectangles and spans, along with improving performance, assuming the above exercise generalizes to other primitives.

Getting and Testing the Code

Most of the changes here are from Eric’s glamor-server branch:

git://people.freedesktop.org/~anholt/xserver glamor-server

The two patches shown above, along with a pair of simple cleanup patches that I’ve written this week, are available here:

git://people.freedesktop.org/~keithp/xserver glamor-server

Of course, as this now uses libepoxy, you’ll need to fetch, build and install that before trying to compile this X server.

Because you can try all of this out in Xephyr, it’s easy to download and build this X server and then run it right on top of your current driver stack inside of X. I’d really like to hear from people with Radeon or nVidia hardware to know whether the code works, and how it compares with fb on the same machine, which you get when you elide the ‘-glamor’ argument from the example Xephyr command line above.

Posted Fri Mar 7 01:12:28 2014

MicroPeak Approved for NAR Contests

The NAR Contest Board has approved MicroPeak for use in contests requiring a barometric altimeter starting on the 1st of April, 2014. You can read the announcement message on the contestRoc Yahoo message board here:

Contest Board Approves New Altimeter

The message was sent out on the 30th of January, but there is a 90 day waiting period after the announcement has been made before you can use MicroPeak in a contest, so the first date approved for contest flights is April 1. After that date, you should see MicroPeak appear in Appendix G of the pink book, which lists the altimeters approved for contest use.

Thanks much to the NAR contest board and all of the fliers who helped get MicroPeak ready for this!

Posted Mon Feb 17 00:45:17 2014

AltOS 1.3.2 — Bug fixes and improved APRS support

Bdale and I are pleased to announce the release of AltOS version 1.3.2.

AltOS is the core of the software for all of the Altus Metrum products. It consists of firmware for our cc1111, STM32L151, LPC11U14 and ATtiny85 based electronics and Java-based ground station software.

This is a minor release of AltOS, including bug fixes for TeleMega, TeleMetrum v2.0 and AltosUI.

AltOS Firmware — GPS Satellite reporting and APRS improved

Firmware version 1.3.1 has a bug on TeleMega when it has data from more than 12 GPS satellites. This causes buffer overruns within the firmware. 1.3.2 limits the number of reported satellites to 12.

APRS now continues to send the last known good GPS position, and reports GPS lock status and number of sats in view in the APRS comment field, along with the battery and igniter voltages.

AltosUI — TeleMega GPS Satellite, GPS max height and Fire Igniters

AltosUI was crashing when TeleMega reported that it had data from more than 12 satellites. While the TeleMega firmware has been fixed to never do that, AltosUI also has a fix in case you fly a TeleMega board without updated firmware.

GPS max height is now displayed in the flight statistics. As the u-Blox GPS chips now provide accurate altitude information, we’ve added the maximum height as computed by GPS here.

Fire Igniters now uses the letters A through D to label the extra TeleMega pyro channels instead of the numbers 0-3.

Posted Sat Feb 15 02:45:39 2014

Missing FOSDEM

I’m afraid Eric and I won’t be at FOSDEM this weekend; our flight got canceled, and the backup they offered would have gotten us there late Saturday night. It seemed crazy to fly to Brussels for one day of FOSDEM, so we decided to just stay home and get some work done.

Sorry to be missing all of the fabulous FOSDEM adventures and getting to see all of the fun people who attend one of the best conferences around. Hope everyone has a great time, and finds only the best chocolates.

Posted Fri Jan 31 18:22:54 2014

AltOS 1.3.1 — Bug fixes and improved APRS support

Bdale and I are pleased to announce the release of AltOS version 1.3.1.

AltOS is the core of the software for all of the Altus Metrum products. It consists of firmware for our cc1111, STM32L151, LPC11U14 and ATtiny85 based electronics and Java-based ground station software.

This is a minor release of AltOS, including bug fixes for TeleMega, TeleMetrum v2.0 and AltosUI.

AltOS Firmware — Antenna down fixed and APRS improved

Firmware version 1.3 has a bug in the support for operating the flight computer with the antenna facing downwards; the accelerometer calibration data would be incorrect. Furthermore, the accelerometer self-test routine would be confused if the flight computer were moved in the first second after power on. The firmware now simply re-tries the self-test several times.

I went out and bought a “real” APRS radio, the Yaesu FT1D to replace my venerable VX 7R. With this in hand, I changed our APRS support to use the compressed position format, which takes fewer bytes to send, offers increased resolution and includes altitude data. I took the altitude data out of the comment field and replaced that with battery and igniter voltages. This makes APRS reasonably useful in pad mode to monitor the state of the flight computer before boost.

Anyone with a TeleMega should update to the new firmware eventually, although there aren’t any critical bug fixes here, unless you’re trying to operate the device with the antenna pointing downwards.

AltosUI — TeleMega support and offline map loading improved.

I added all of the new TeleMega sensor data as possible elements in the graph. This lets you see roll rates and horizontal acceleration values for the whole flight. The ‘Fire Igniter’ dialog now lists all of the TeleMega extra pyro channels so you can play with those on the ground as well.

Our offline satellite images are downloaded from Google, but they restrict us to reading 50 images per minute. When we tried to download a 9x9 grid of images to save for later use on the flight line, Google would stop feeding us maps after the first 50. You’d have to poke the button a second time to try and fill in the missing images. We fixed this by just limiting how fast we load maps, and now we can reliably load an 11x11 grid of images.

Of course, there are also a few minor bug fixes, so it’s probably worth updating even if the above issues don’t affect you.

Posted Wed Jan 22 20:39:01 2014

X bitmaps vs OpenGL

Of course, you all know that X started life as a monochrome window system for the VS100. Back then, bitmaps and rasterops were cool; you could do all kinds of things with simple bit operations. Things changed, and eventually X bitmaps became useful only off-screen for clip masks, text and stipples. These days, you’ll rarely see anyone using a bitmap — everything we used to use bitmaps for has gone all alpha-values on us.

In OpenGL, there aren’t any bitmaps. About the most ‘bitmap-like’ object you’ll find is an A8 texture, holding 8 bits of alpha value for each pixel. There’s no way to draw to or texture from anything where each pixel is represented as a single bit.

So, as Eric goes about improving Glamor, he got a bit stuck with bitmaps. We could either:

  • Support them only on the CPU, uploading copies as A8 textures when used as a source in conjunction with GPU objects.

  • Support them as 1bpp on the CPU and A8 on the GPU, doing fancy tracking between the two objects when rendering occurred.

  • Fix the CPU code to deal with bitmaps stored 8 bits per pixel.

I thought the latter choice would be the best plan — directly share the same object between CPU and GPU rendering, avoiding all reformatting as things move around in the server.

Why is this non-trivial?

Usually, you can flip formats around with reckless abandon in X, as it has separate bits-per-pixel and depth values everywhere. That’s how we do things like 32-bits-per-pixel RGB surfaces; we just report them as depth 24 and everyone is happy.

Bitmaps are special though. The X protocol has separate (and overly complicated) image formats for single bit images, and those have to be packed 1 bit per pixel. Within the server, bitmaps are used for drawing core text, stippling and creating clip masks. They’re the ‘lingua franca’ of image formats, allowing you to translate between depths by pulling a single “plane” out of a source image and painting it into a destination of arbitrary depth.

As such, the frame buffer rendering code in the server assumes that bitmaps are always 1 bit per pixel. Given that it must deal with 1bpp images on the wire, and given the history of X, it certainly made sense at the time to simplify the code with this assumption.

A blast from the past

I’d actually looked into doing this before. As part of the DECstation line, DEC built the MX monochrome frame buffer board and, to save money, they created it by populating a single bit in each byte rather than packing 8 pixels per byte. I have this vague memory that they were able to use only 4 memory chips this way.

The original X driver for this exposed a depth-8 static color format because of the assumptions made by the (then current) CFB code about bitmap formats.

Jim Gettys wandered over to MIT while the MX frame buffer was in design and asked how hard it would be to support it as a monochrome device instead of the depth-8 static color format. At the time, fixing CFB would have been a huge effort, and there wasn’t any infrastructure for separating the wire image format from the internal pixmap format. So, we gave up and left things looking weird to applications.

Hacking FB

These days, the amount of frame buffer code in the X server is dramatically less; CFB and MFB have been replaced with the smaller (and more general) FB code. It turns out that the number of places which need to deal with individual bits in a bitmap is now limited to a few stippling and CopyPlane functions. And, in those functions, the number of individual read operations from the bitmap is small. Each of those fetches looked like:

bits = READ(src++)

All I needed to do was make this operation return 32 bits by pulling one bit from each of 8 separate 32-bit chunks and merging them together. The first thing to do was to pad the pixmap out to a 32-byte boundary, rather than a 32-bit boundary. This ensured that I would always be able to fetch data from the bitmap in 8 32-bit chunks. Next, I simply replaced the READ macro call with:

    bits = fb_stip_read(src, srcBpp);
    src += srcBpp;

The new fb_stip_read function checks srcBpp and packs things together for 8bpp images:

/*
 * Given a depth 1, 8bpp stipple, pull out
 * a full FbStip worth of packed bits
 */
static inline FbStip
fb_pack_stip_8_1(FbStip *bits) {
    FbStip      r = 0;
    int         i;

    for (i = 0; i < 8; i++) {
        FbStip  b;
        FbStip  p;      /* must be wide enough for the MSBFirst high bits */

        b = FB_READ(bits++);
#if BITMAP_BIT_ORDER == LSBFirst
        p = (b & 1) | ((b >> 7) & 2) | ((b >> 14) & 4) | ((b >> 21) & 8);
        r |= p << (i << 2);
#else
        p = (b & 0x80000000) | ((b << 7) & 0x40000000) |
            ((b << 14) & 0x20000000) | ((b << 21) & 0x10000000);
        r |= p >> (i << 2);
#endif
    }
    return r;
}

/*
 * Return packed stipple bits from src
 */
static inline FbStip
fb_stip_read(FbStip *bits, int bpp)
{
    switch (bpp) {
    default:
        return FB_READ(bits);
    case 8:
        return fb_pack_stip_8_1(bits);
    }
}

It turns into a fairly hefty amount of code, but the number of places this ends up being used is pretty small, so it shouldn’t increase the size of the server by much. Of course, I’ve only tested the LSBFirst case, but I think the MSBFirst code is correct.

I’ve sent the patches to do this to the xorg-devel mailing list, and they’re also on the ‘depth1’ branch in my repository:

git://people.freedesktop.org/~keithp/xserver.git

Testing

Eric also hacked up the test suite to be runnable by piglit, and I’ve run it in that mode against these changes. I had made a few mistakes, and the test suite caught them nicely. Let’s hope this adventure helps Eric out as he continues to improve Glamor.

Posted Mon Jan 13 18:52:30 2014

Pixmap ID Lifetimes under Present Redirection (Part Deux)

I recently wrote about pixmap content and ID lifetimes. I think the pixmap content lifetime stuff was useful, but the pixmap ID stuff was not quite right. I’ve just copied the section from the previous post and will update it.

PresentRedirect pixmap ID lifetimes (reprise)

A compositing manager will want to know the lifetime of the pixmaps delivered in PresentRedirectNotify events so that it can clean up whatever local data it has associated with them. For instance, GL compositing managers will construct textures for each pixmap that need to be destroyed when the pixmap disappears.

Present encourages pixmap destruction

The PresentPixmap request says:

PresentRegion holds a reference to ‘pixmap’ until the presentation occurs, so ‘pixmap’ may be immediately freed after the request executes, even if that is before the presentation occurs.

Breaking this when doing Present redirection seems like a bad idea; it’s a very common pattern for existing 2D applications.

New pixmap IDs for everyone

Because pixmap IDs for present contents are owned by the source application (and may be re-used immediately when freed), Present can’t use that ID in the PresentRedirectNotify event. Instead, it must allocate a new ID and send that. The server has its own XID space to use, so it can allocate one of those and bind it to the same underlying pixmap; the pixmap will not actually be destroyed until both the application XID and the server XID are freed.

The compositing manager will receive this new XID, and is responsible for freeing it when it doesn’t need it any longer. Of course, if the compositing manager exits, all of the XIDs passed to it will automatically be freed.

I considered allocating client-specific XIDs for this purpose; the X server already has a mechanism for allocating internal IDs for various things that are to be destroyed at client shut-down time. Those XIDs have bit 30 set, and are technically invalid XIDs as X specifies that the top three bits of every XID will be zero. However, the cost of using a server ID instead (which is a valid XID) is small, and it’s always nice to not intentionally break the X protocol (even as we continue to take advantage of “accidental” breakages).

(Reserving the top three bits of XIDs and insisting that they always be zero was a nod to the common practice of weakly typed languages like Lisp. In these languages, 32-bit object references were typed by using a few tag bits (2-3) within the value. Most of these were just pointers to data, but small integers that fit in the remaining bits (29-30) could be constructed without allocating any memory. By making XIDs only 29 bits, these languages could be assured that all XIDs would be small integers.)

Pixmap Destroy Notify

The compositing manager needs to know when the application is done with the pixmap so that it can clean up when it is also done, destroying the extra pixmap ID it was given and freeing any other local resources. When the application sends a FreePixmap request, that XID will become invalid, but of course the pixmap itself won’t be freed until the compositing manager sends a FreePixmap request with the extra XID it was given.

Because the pixmap ID known by the compositing manager is going to be different from the original application XID, we need an event delivered to the compositing manager with the new XID, which makes this event rather Present-specific. We don’t need to select for this event; the compositing manager must handle it correctly, so we’ll just send it whenever the compositing manager has received that custom XID.

PresentPixmapDestroyNotify

┌───
    PresentPixmapDestroyNotify
    type: CARD8         XGE event type (35)
    extension: CARD8        Present extension opcode
    sequence-number: CARD16
    length: CARD32          0
    evtype: CARD16          Present_PixmapDestroyNotify
    eventID: XFIXESEVENTID
    event-window: WINDOW
    pixmap: pixmap
└───

This event is delivered to the clients selecting for SubredirectNotify for pixmaps which were delivered in a PresentRedirectNotify event and for which the originating application Pixmap has been destroyed. Note that the pixmap may still be referenced by other objects within the X server, by a pending PresentPixmap request, as a window background, GC tile or stipple or Picture drawable (among others), but this event serves to notify the selecting client that the application is no longer able to access the underlying pixmap with its original Pixmap ID.

Posted Wed Jan 1 20:20:40 2014

Object Lifetimes under Present Redirection

Present extension redirection passes responsibility for showing application contents from the X server to the compositing manager. This eliminates an extra copy to the composite extension window buffer and also allows the application contents to be shown in the right frame.

(Currently, the copy from application buffer to window buffer is synchronized to the application-specified frame, and then a Damage event is delivered to the compositing manager which constructs the screen image using the window buffer and presents that to the screen at least one frame later, which generally adds another frame of delay for the application.)

The redirection operation itself is simple — just wrap the PresentPixmap request up into an event and send it to the compositing manager. However, the pixmap to be presented is allocated by the application, and hence may disappear at any time. We need to carefully control the lifetime of the pixmap ID and the specific frame contents of the pixmap so that the compositing manager can reliably construct complete frames.

We’ll separately discuss the lifetime of the specific frame contents from that of the pixmap itself. By the “frame contents”, I mean the image, as a set of pixel values in the pixmap, to be presented for a specific frame.

Present Pixmap contents lifetime

After the application is finished constructing a particular frame in a pixmap, it passes the pixmap to the X server with PresentPixmap. With non-redirected Present, the X server is responsible for generating a PresentIdleNotify event once the server is finished using the contents. There are three different cases that the server handles, matching the three different PresentCompleteModes:

  1. Copy. The pixmap contents are not needed after the copy operation has been performed. Hence, the PresentIdleNotify event is generated when the target vblank time has been reached, right after the X server calls CopyArea.

  2. Flip. The pixmap is being used for scanout, and so the X server won’t be done using it until some other scanout buffer is selected. This can happen as a result of window reconfiguration which makes the displayed window no longer full-screen, but the usual case is when the application presents a subsequent frame for display, and the new frame replaces the old. Thus, the PresentIdleNotify event generally occurs when the target vblank time for the subsequent frame has been reached, right after the subsequent frame’s pixmap has been selected for scanout.

  3. Skip. The pixmap contents will never be used, and the X server figures this out when a subsequent frame is delivered with a matching target vblank time. This happens when the subsequent Present operation is queued by the X server.

In the Redirect case, the X server cannot tell when the compositing manager is finished with the pixmap. The same three cases as above apply here, but the results are slightly different:

  1. Composite. The pixmap is being used as a part of the screen image and must be composited with other window pixmaps. In this case, the compositing manager will need to hold onto the pixmap until a subsequent pixmap is provided by the application; that is, until it receives a subsequent PresentRedirectNotify for the same window.

  2. Flip. The compositing manager is free to take the application pixmap and use it directly in a subsequent PresentPixmap operation and let the X server ‘flip’ to it; this provides a simple way of avoiding an extra copy while not needing to fuss around with ‘unredirecting’ windows. In this case, the X server will need the pixmap contents until a new scanout pixmap is provided, and the compositing manager will also need the pixmap in case the contents are needed to help construct a subsequent frame.

  3. Skip. In this case, the compositing manager notices that the window’s pixmap has been replaced before it was ever used.

In case 2, the X server and the compositing manager will need to agree on when the PresentIdleNotify event should be delivered. In the other two cases, the compositing manager itself will be in charge of that.

To let the compositing manager control when the event is delivered, the X server will count the number of PresentPixmap redirection events sent, and the compositing manager will deliver matching PresentIdle requests.
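
In other words, the compositing manager just keeps a per-pixmap count of redirect events and matches each one with a PresentIdle request once it no longer needs the buffer. A sketch of that bookkeeping (present_idle stands in for whatever request wrapper a real client library would provide for this proposed protocol):

/* One entry per redirected pixmap the compositing manager tracks */
struct cm_pixmap {
    Pixmap      pixmap;     /* ID from PresentRedirectNotify */
    unsigned    pending;    /* redirect events not yet matched */
};

static void
cm_release_pixmap(struct cm_pixmap *p)
{
    while (p->pending > 0) {
        present_idle(p->pixmap);    /* hypothetical wrapper */
        p->pending--;
    }
}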

PresentIdle

┌───
    PresentIdle
    pixmap: PIXMAP
└───
Errors: Pixmap, Match

Informs the server that the Pixmap passed in a PresentRedirectNotify event is no longer needed by the client. Each PresentRedirectNotify event must be matched by a PresentIdle request for the originating client to receive a PresentIdleNotify event.

PresentRedirect pixmap ID lifetimes

A compositing manager will want to know the lifetime of the pixmaps delivered in PresentRedirectNotify events so that it can clean up whatever local data it has associated with them. For instance, GL compositing managers will construct textures for each pixmap that need to be destroyed when the pixmap disappears.

Some kind of PixmapDestroyNotify event is necessary for this; the alternative is for the compositing manager to constantly query the X server to see if the pixmap IDs it is using are still valid, and even that isn’t reliable as the application may re-use pixmap IDs for a new pixmap.

It seems like this PixmapDestroyNotify event belongs in the XFixes extension—it’s a general mechanism that doesn’t directly relate to Present. XFixes doesn’t currently have any Generic Events associated with it, but adding that should be fairly straightforward. The Present extension could then automatically select for PixmapDestroyNotify events when it delivers the pixmap in a PresentRedirectNotify event, so that the client is assured of receiving the associated PixmapDestroyNotify event.

One remaining question I have is whether this is sufficient, or if the compositing manager needs a stable pixmap ID which lives beyond the life of the application pixmap ID. If so, the solution would be to have the X server allocate an internal ID for the pixmap and pass that to the client somehow; presumably in addition to the original pixmap ID.

XFixesPixmapSelectInput

XFIXESEVENTID { XID }

    Defines a unique event delivery target for Present
    events. Multiple event IDs can be allocated to provide
    multiple distinct event delivery contexts.

PIXMAPEVENTS { XFixesPixmapDestroyNotify }

┌───
    XFixesPixmapSelectInput
    eventid: XFIXESEVENTID
    pixmap: PIXMAP
    events: SETofPIXMAPEVENTS
└───
Errors: Pixmap, Value

Changes the set of events to be delivered for the target pixmap. A Value error is sent if ‘events’ contains invalid event selections.

XFixesPixmapDestroyNotify

┌───
    XFixesPixmapDestroyNotify
    type: CARD8         XGE event type (35)
    extension: CARD8        XFixes extension request number
    sequence-number: CARD16
    length: CARD32          0
    evtype: CARD16          XFixes_PixmapDestroyNotify
    eventID: XFIXESEVENTID
    pixmap: pixmap
└───

This event is delivered to all clients selecting for it on ‘pixmap’ when the pixmap ID is destroyed by a client. Note that the pixmap may still be referenced by other objects within the X server, as a window background, GC tile or stipple or Picture drawable (among others), but this event serves to notify the selecting client that the ID is no longer associated with the underlying pixmap.

Posted Sat Dec 28 13:38:45 2013
