Finally got to the bottom of the dropouts / glitches.. Some were my fault in the code, some were down to a dodgy cable feeding the HD PVR from the set top box. With the replacement cable in place, I had it stream for 12 hours straight with no reported dropouts.. that's close to a record for this unit =).
Even with the new cable in place, however, I could still make it glitch, but only when recording.. the very time you don't want glitches ;p This was while writing data to a network share mounted via cifs, and the glitches became more frequent if the same share was being used by another client.. all of which got me thinking...
I was only using a single thread to handle the read from the device, the writes out to all the socket clients, and the write to the recording file.. maybe the writes to the mounted share were taking too long, and the extra delay meant too much data built up at the device node, causing the connection to be dropped..
I had a chat with the MythTV developers in their IRC channel, figuring they must have hit issues like this when adding support for the HD PVR, and that gave me a few ideas.. so I added a buffered writer thread (it's at this point you really miss Java). With a little careful use of locking and buffer allocation, the writer thread takes buffers from a write list, writes them out, and puts them back onto a free list to be reused.
(Ok, if you're reading the code, they're arrays, not lists, and thus pre-allocated to a max length, but each array only holds pointers to the buffers, not the buffers themselves, so the overall overhead isn't too bad).
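The write-list / free-list scheme described above can be sketched roughly like this — a minimal single-lock pthreads version, not the actual code (the names `get_buffer`, `queue_write`, `dequeue_write` and `release_buffer` are mine, and a real version would use a ring index rather than shifting the array):

```c
/* Minimal sketch of the buffer-pool idea: two fixed-size arrays of
 * pointers (a free stack and a write queue) guarded by one mutex, with
 * a condition variable to wake the writer thread. */
#include <pthread.h>
#include <stdlib.h>

#define MAX_BUFS 4096
#define BUF_SIZE 4096

static char *free_bufs[MAX_BUFS];   /* pointers to reusable buffers  */
static char *write_bufs[MAX_BUFS];  /* pointers to buffers with data */
static int nfree = 0, nwrite = 0, ntotal = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t data_ready = PTHREAD_COND_INITIALIZER;

/* Reader thread: grab a buffer, reusing a free one when possible. */
char *get_buffer(void)
{
    char *b = NULL;
    pthread_mutex_lock(&lock);
    if (nfree > 0)
        b = free_bufs[--nfree];
    else if (ntotal < MAX_BUFS) {
        b = malloc(BUF_SIZE);
        if (b) ntotal++;
    }
    pthread_mutex_unlock(&lock);
    return b;                 /* NULL when the pool is exhausted */
}

/* Reader thread: queue a filled buffer and wake the writer. */
void queue_write(char *b)
{
    pthread_mutex_lock(&lock);
    write_bufs[nwrite++] = b;
    pthread_cond_signal(&data_ready);
    pthread_mutex_unlock(&lock);
}

/* Writer thread: block until data arrives, take the oldest buffer. */
char *dequeue_write(void)
{
    char *b;
    pthread_mutex_lock(&lock);
    while (nwrite == 0)
        pthread_cond_wait(&data_ready, &lock);
    b = write_bufs[0];
    for (int i = 1; i < nwrite; i++)  /* O(n) shift, for brevity only */
        write_bufs[i - 1] = write_bufs[i];
    nwrite--;
    pthread_mutex_unlock(&lock);
    return b;
}

/* Writer thread: after write(), hand the buffer back to the free stack. */
void release_buffer(char *b)
{
    pthread_mutex_lock(&lock);
    free_bufs[nfree++] = b;
    pthread_mutex_unlock(&lock);
}
```

The point of the free stack is that buffers are malloc'd once and recycled, so the reader thread never stalls on allocation while data is piling up at the device node.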
With the new buffered writer in place, the glitches during writes disappeared.. and I discovered I needed to update the code for large file support (oops, should have seen that one coming!!). Thankfully that was as simple as adding a few #defines.
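The original doesn't say which #defines were used, but on 32-bit Linux (like the Pi) the standard glibc route is the large-file-support feature test macros, which must appear before any system header is included:

```c
/* Standard glibc large-file support: these must come before ANY
 * system include, or they have no effect for that translation unit. */
#define _FILE_OFFSET_BITS 64   /* makes off_t, open(), lseek() etc. 64-bit */
#define _LARGEFILE_SOURCE      /* exposes fseeko()/ftello()                */

#include <stdio.h>
#include <sys/types.h>

/* With the defines in place, off_t is 8 bytes even on 32-bit ARM,
 * so a recording can grow past the 2GB signed-32-bit limit. */
```

The same effect can be had project-wide with `-D_FILE_OFFSET_BITS=64` on the compiler command line, which avoids the "must be first" ordering trap.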
Lastly, since I now had a buffer pool style solution running, I added a way to monitor the buffer usage.. the 'status' url will report the number of connected streaming clients, the number of buffers in the free pool, the number of buffers with data waiting to be written to disk, and the total number of buffers allocated. In an ideal world the total number of buffers will stay small, but it will start climbing rapidly if the network gets busy.. currently the code allows a max of 4096 buffers, which seems enough to cope with my network, although that may need tweaking..
A quick check on the pi says a char * pointer takes 4 bytes, so that's 32k for the free/write array storage, and holding 4096 buffers of 4k each will eat 16MB of ram.. even on a 256MB pi that seems ok =). I've seen my total buffer usage creep up toward 11MB, so I might even consider allowing 32MB (8192 buffers) if it helps guarantee those writes.
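The arithmetic behind those figures, spelled out (using the Pi's 4-byte pointers; the helper names are just for illustration):

```c
/* Memory cost of the pool: two pointer arrays (free + write) of
 * nbufs entries each, plus nbufs data buffers of buf_size bytes. */
long pool_array_bytes(long nbufs, long ptr_size)
{
    return 2 * nbufs * ptr_size;   /* free array + write array */
}

long pool_data_bytes(long nbufs, long buf_size)
{
    return nbufs * buf_size;       /* the buffers themselves */
}
```

So 2 × 4096 × 4 bytes = 32KiB of pointer storage, 4096 × 4KiB = 16MiB of buffer data, and doubling to 8192 buffers would mean 32MiB.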
Now it's onto coding the java xmltv/imdb channel changing recording scheduler... I've made pretty good progress on that so far (after getting myself temporarily blocked by Google & Yahoo ;p)
And watch out for those URL auto-suggestions! It seems Chrome actually sends a speculative HTTP GET for a url it offers you, before you select it and hit enter.. obviously, this is bad when you want to request the 'status' url and it hits the 'stoprec' url instead ;p
Update: the device restarts are gone with this version.. but it still seems to write bad data if streaming clients join while recording to disk.. I'll try moving the streaming client writes to a buffered writer too.. see if that helps.. although it'll get fun with 2 consumers of the buffer data ;p
Update 2: I've moved the socket writes to the buffer output thread, and added another thread to reclaim excess free buffers over time.. this has fixed the joining-client issue.. also tidied up a few bits.. new code to go up soon, once I get the http parser rewritten.
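The reclaim idea could look something like this — a background thread that periodically free()s buffers above a floor, so a burst of network congestion doesn't pin the full 16MB forever. This is a self-contained sketch, not the actual code: the pool globals, `reclaim_pass` name, 64-buffer floor, and 5-second interval are all my assumptions.

```c
/* Sketch: trim the free stack back down to a floor every few seconds. */
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

#define MAX_BUFS  4096
#define KEEP_FREE 64      /* floor of free buffers to retain (illustrative) */

char *free_bufs[MAX_BUFS];
int nfree = 0, ntotal = 0;
pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

/* One reclaim pass: free() buffers above the floor, under the pool lock,
 * returning the memory to the allocator / OS. */
void reclaim_pass(void)
{
    pthread_mutex_lock(&pool_lock);
    while (nfree > KEEP_FREE) {
        free(free_bufs[--nfree]);
        ntotal--;
    }
    pthread_mutex_unlock(&pool_lock);
}

/* The background thread just runs a pass every few seconds. */
void *reclaim_thread(void *arg)
{
    (void)arg;
    for (;;) {
        sleep(5);
        reclaim_pass();
    }
    return NULL;
}
```

Keeping a small floor of free buffers means steady-state streaming never has to malloc, while a post-congestion pile of thousands of idle 4k buffers gets handed back over a few passes instead of all at once.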