Easy one this.. Press Guide, then Green, and "Hey Presto" you are at your planner. Just use the old Mark One Eyeball to make sense of all that data.
Of course, if you want to do something with that data.. say track which Films you have recorded, how long they were, and how much space they took up.. you'll need to link the Mark One Eyeball to some form of data storage, ideally one that's at least accessible as text.
After you've written down the content of your planner a few times, you'll realise that a) it's dull, and b) it's very dull. The solution is obviously to automate it.. but how??
Well, Sky doesn't provide a nice API to query their set top boxes. In fact, Sky doesn't provide any API to query with at all, which leaves us back at emulating being a human.
So, start by gaining control of your Sky box: record the remote's up/down/left/right/select/back/guide/green button presses into a USB-controlled IR sender. I'm using the old MCE USB emitter, but I've had great success with a RedRat USB IR dongle too. For now, let's just assume that you've achieved that bit, and now have the ability to send button presses from a script.
Use the script to call up the planner, and navigate down through each entry, entering each one, and then backing out to the list. Stay 'inside' each planner entry for a few seconds.
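That walk script can be sketched in Python. This is a minimal sketch, assuming LIRC's `irsend` is your way of firing the IR emitter; the remote name "sky" and the button names are placeholders for whatever your own LIRC config uses.

```python
import subprocess

# Placeholder: the remote name as configured in your LIRC setup.
REMOTE = "sky"

def press(button):
    """Fire one IR button press via LIRC's irsend."""
    subprocess.run(["irsend", "SEND_ONCE", REMOTE, button], check=True)

def planner_walk(entries):
    """Button sequence that opens the planner, then visits each entry
    and backs out to the list again."""
    seq = ["guide", "green"]      # jump straight to the planner
    for _ in range(entries):
        seq.append("select")      # step inside the entry
        seq.append("back")        # ...sleep a few seconds before this in real use...
        seq.append("down")        # move to the next entry in the list
    return seq

# In real use, something like:
#   for b in planner_walk(20): press(b); time.sleep(3)
```

Keeping the sequence generation separate from the sending makes it easy to dry-run the walk before pointing it at the box.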
Then, record the video output of the script executing. For what we're doing, the old S-Video or SCART output is fine; hook that up to the PC, and have it record the 'walk through the planner' to an AVI, or MPEG, or TS, or whatever format you plan to use.
Next, we plan to use AviSubDetector to perform OCR on the recorded video. After all, those planner entries are just like subtitles.. right?
The 1st issue here is most likely that the file you've recorded isn't an AVI (well, they are getting kinda old now!), so we'll want to make it pretend to be one. (Even if it IS an AVI, we'll still want to do this, as we're going to get creative.)
So, we install AviSynth (I used 2.5.8), create a new .avs file in the same dir as the video recording, and give it the content..
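Something along these lines, matching the source-loading call in the full script further down:

```
DirectShowSource("MyPlannerWalkthrough.TS", fps=25)
```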
Where 'MyPlannerWalkthrough.TS' is the name of the video file you recorded.
Then we open the avs file (not the original video!) in AviSubDetector, and hit the preview button (below the OCR(Experimental) button) to see our video.. the long slidey bar under the preview button can be used to yoink the position in the video around.
If, like me, loading the video through AviSynth causes it to become 'upside down', then change the line to be..
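Presumably a FlipVertical wrapper, as it appears in the full script later:

```
FlipVertical(DirectShowSource("MyPlannerWalkthrough.TS", fps=25))
```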
Which does pretty much what it says on the tin.
Ok, now if you happen to have the ability to record in high def, you'll notice that AviSubDetector seems best optimised for standard def video, and you either need to go record it again in SD, or cheat & resize the video on the fly.. say.. like this..
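Something like a Lanczos resize down to 960x540, as in the first line of the full script below:

```
FlipVertical(DirectShowSource("MyPlannerWalkthrough.TS", fps=25)).Lanczos4Resize(960,540)
```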
Even if you don't need to resize it, bear in mind that all the x/y offsets in the rest of this post assume a 960x540 frame size.
Great.. so now you have a video in AviSubDetector.. go ahead and set up the crop to focus on the two lines that show the recording name and length information when 'inside' a planner entry. (Use the yoink bar to find a place where the video is inside a planner entry, then use the Settings tab and adjust the crop sliders to highlight just those two lines of text.)
If you run in OCR mode now.. everything will go horribly wrong, because the Sky UI uses colors on those two lines such that sometimes deep blue is the font color, and sometimes it's the background color, and AviSubDetector doesn't like that. So.. we'll fix it in AviSynth.
vid = FlipVertical(DirectShowSource("MyPlannerWalkthrough.TS", fps=25)).Lanczos4Resize(960,540)
line1 = Crop(vid,0,290,960,30)
testForLine1 = Crop(line1,570,5,16,20)
invertedLine1 = Invert(line1)
line1 = ConditionalFilter(convertToYV12(testForLine1), line1, invertedLine1, "AverageLuma()", "lessthan", "100")
line2 = Crop(vid,0,320,960,30)
testForLine2 = Crop(line2,570,5,16,20)
invertedLine2 = Invert(line2)
line2 = ConditionalFilter(convertToYV12(testForLine2), line2, invertedLine2, "AverageLuma()", "lessthan", "100")
line1 = Levels(line1, 122,1,200,0,255)
line1 = GreyScale(line1)
line2 = Levels(line2, 122,1,200,0,255)
line2 = GreyScale(line2)
vid = Overlay(vid,line1,0,290)
vid = Overlay(vid,line2,0,320)
That rather scary looking bit of code selects each of the lines using Crop, then extracts a small portion of each line (using Crop on the cropped part), then uses a ConditionalFilter to evaluate the "Luma" of that small portion, and Inverts the colors for the whole line if the small portion isn't dark (Luma >= 100). Finally, the color range in the resulting line is clamped with Levels, making light greys become white and dark greys become black, then the entire line is converted to greyscale (which, thanks to the level adjust, is more like black and white). Both processed lines are then overlaid back onto the video at their original positions.
What did all that achieve? Pretty much it means those two lines from the UI will now always have white text, on a black background, regardless of what colors they started with.. pretty neat =).
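The same trick can be sketched in Python with numpy for a single cropped line, treated as a greyscale image. The luma threshold of 100 and the sample-patch position mirror the AviSynth version; the array shapes and the 160 levels cutoff here are my own assumptions, not values from the script above.

```python
import numpy as np

def normalise_line(line, patch=(slice(5, 25), slice(570, 586)), thresh=100):
    """Force a cropped text line to white-on-black.

    'line' is a greyscale image as a 2D uint8 array. If the sampled
    patch is already dark (mean luma < thresh), leave it alone;
    otherwise invert the whole line so the text ends up bright.
    """
    line = line.astype(np.uint8)
    if line[patch].mean() >= thresh:        # bright patch: invert the line
        line = 255 - line
    # crude levels pass: darks go to black, lights go to white
    return np.where(line > 160, 255, 0).astype(np.uint8)
```

Whatever colours the UI started with, the OCR stage only ever sees white text on a black background.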
Now when running AviSubDetector, it's pretty happy with the result.. Almost happy enough that this could work.. except for transitions.
Transitions are when the video moves from the Planner, to the Planner entry and back. During these periods there's a little cross-fade present from the set-top box or the video encoder.. AviSubDetector isn't a great fan of these, and tries to OCR the fuzzy mix of both sets of text muddled together. Worse, once it's decided that was the important thing to do, it then ignores the 'clean' data, because it didn't differ enough from the fuzzy gunk.
Ideally there would be some way to add a slight delay to the AviSubDetector to make it wait a while after a change.. but I gave up looking, and instead made the 1st 3 lines of my AviSynth script say..
vid = FlipVertical(DirectShowSource("2012_3_28_15_20_38.TS", fps=25)).Lanczos4Resize(960,540)
vid = ChangeFPS(vid, 0.5, false)
vid = ChangeFPS(vid, 25, false)
This basically tells AviSynth: drop the framerate so that there is only 1 frame every 2 seconds, doing this by throwing away all the other frames. That would be a pretty short video (and AviSubDetector is kinda expecting subtitles at human speed, not per-frame text), so we then blow it back up to 25fps, which is done by duplicating all the frames. The overall effect is that the video now only updates once every 2 seconds, which means the chance of hitting a transition is a) very small, and b) if one does occur, 2 seconds later there should be a big enough change to cause a re-read.
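The decimate-then-duplicate effect can be illustrated on a plain list of frames. This is a toy model of ChangeFPS (nearest-frame, no blending, like passing false): 25fps down to 0.5fps keeps one frame in every 50, and going back up to 25fps repeats each kept frame 50 times.

```python
def change_fps(frames, src_fps, dst_fps):
    """Toy ChangeFPS: for each output tick, pick the nearest source
    frame; drops frames when slowing down, repeats when speeding up."""
    out_len = round(len(frames) * dst_fps / src_fps)
    step = src_fps / dst_fps
    return [frames[min(int(i * step), len(frames) - 1)] for i in range(out_len)]

frames = list(range(100))            # 4 seconds of 25fps "video"
slow = change_fps(frames, 25, 0.5)   # one frame every 2 seconds
back = change_fps(slow, 0.5, 25)     # back to 25fps, frames duplicated
```

The round trip keeps the original length but each surviving frame is now held on screen for 2 seconds.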
With these changes in place, I almost had a clean read of the entire planner.. except for one recording, which had a very short info line.. just "98mins". It turns out that made it too small for the OCR engine to care about. This was caused by the Block Count being too high; tweaking it down to '1' (use the Settings tab, tick 'All (text)' and edit the panel at the base of the screen) solved that nicely.
So there you go.. how to 'read' the Sky Planner. Enough to discover the Name & Length of every recording.. Handy.
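To actually get that data somewhere useful, the OCR output still needs parsing. A rough sketch, assuming you save AviSubDetector's results as an SRT-style file; the block layout and the 'NNmins' pattern are assumptions about what the OCR spits out for my planner, not a guaranteed format.

```python
import re

def parse_planner_srt(srt_text):
    """Rough parse of OCR'd planner text: each subtitle block's first
    text line is taken as the recording name, and the first 'NNmins'
    in the block as its length in minutes (None if absent)."""
    recordings = []
    for block in re.split(r"\n\s*\n", srt_text.strip()):
        # drop the SRT index and timing lines, keep only the OCR'd text
        lines = [l for l in block.splitlines()
                 if l.strip() and not l.strip().isdigit() and "-->" not in l]
        if not lines:
            continue
        name = lines[0].strip()
        m = re.search(r"(\d+)\s*mins", block)
        recordings.append((name, int(m.group(1)) if m else None))
    return recordings
```

From there it's a short hop to a CSV or a little database of every recording's name and length.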