Posts in Category: Dev

Samsung Smart TV Dev – my progress so far…

In late 2011/early 2012, I volunteered to have a go at producing a Samsung TV app – a fairly basic videocast player for a certain podcast network. As the technology used is HTML/JavaScript, it seemed interesting.

Given Samsung provide a sample app, it seemed an easy job. To make it a little more complicated, I wanted to provide an app covering several RSS feeds. So the idea was a “channel selector” page which took you to an episode list for the selected channel, where you could watch an episode.

Painfully, the actual video playing is done via custom Samsung code – so it’s not possible to develop fully in a browser environment. You have to use the Samsung emulator.

So, the core of the app was developed in CoffeeScript, using Middleman to generate a static site to run inside the Samsung TV emulator. I used the Serenade.js front-end framework – sort of like Backbone. To make the site flexible without having to release new versions via Samsung, a separate “config” site was used – this defined what podcast feeds to include, logos for them, etc. KeyboardJS was used to manage keyboard input – and make it fairly easy to switch between browser and emulator.

Unfortunately (?) the SDK is still being developed by Samsung, so code that worked initially broke when a new version of the SDK came out.

The latest version of this app is available on GitHub, along with some sample config to drive the app.

Given the SDK issues, I reverted to doing a more basic app that just plays one feed. This has progressed to the point that it’s had a few reviews with Samsung – I just need to find the time to sort out the last few items. It’s also a Middleman-based app (though it’s missing a Gemfile – doh). The idea is that it’s a template for producing several apps – not as good from a user-experience perspective, but at least it’s minimal effort from a development one.

As I didn’t have any ‘real’ Samsung kit to test it on, I have had to rely on the emulator, which worked fairly well – despite having to be run on Windows :(


Rails Engine to produce a photography portfolio/brochureware site.

I was asked to provide a flexible photography portfolio – a brochure-type site.

Given Rails was on my “fad” list, I decided to have a go that way.

What I came up with was a ‘generic’ Rails Engine to do the work of pulling the galleries etc together. This was then used inside a specific Rails app which just had the photos/galleries/comments. The aim being that the site owner can arrange things as they want and then do a commit/push to Heroku to update their site.

The Engine is open source and available on GitHub.

It’s a bit dated now, using Rails 3.0.9 – although it should be easy to update, as it’s quite simple :) – the Gemfile only has Rails, Capybara and sqlite3. Not sure why the last two are there, as no DB is used.

The controllers do the work of responding to various user requests – using the folder/image structure found in the host app to define the site structure.

In the model directory, there are several classes:

  • photo – wraps each photo image, including a related thumbnail and caption.
  • project – wraps a directory, tracking what images are in it.
  • site_config – handles parsing the overall site configuration, which is held in a YAML file in the host Rails app.
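As a rough sketch of what the site_config class does – the class name, keys and defaults here are illustrative, not the engine’s actual API:

```ruby
require 'yaml'

# Illustrative sketch of a site_config-style model: parse a YAML file
# from the host app and expose settings with sensible defaults.
# (Class name, keys and defaults are examples, not the engine's real API.)
class SiteConfig
  def initialize(path)
    @data = File.exist?(path) ? YAML.load_file(path) : {}
  end

  def title
    @data.fetch('title', 'Portfolio')
  end

  def galleries_dir
    @data.fetch('galleries_dir', 'app/assets/images/galleries')
  end
end
```

The host app just drops a YAML file in the expected place; anything it doesn’t set falls back to the defaults.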

The lib directory defines the basic engine configuration – how it hooks into the host app.

Under test/dummy there is a minimal Rails app using the engine – for testing. Although I can’t see a sample config file – perhaps it relied on the defaults :)

At the time, it seemed a good way to produce the site.  I’ve not had to use it again yet, and that will be the test of how ‘generic’ it is…

Ruby via the backdoor…

At work, our team does not do much development – we support a third party’s product.  Most work is around the edges, integrating it with the rest of the systems.

Some of that work is in Java, but recently I had a chance to sneak in a little Ruby (or JRuby, to be specific).

Why Ruby/JRuby? To me, the main reason is that it’s a more succinct way of expressing the program logic; combined with the JVM integration features of JRuby, it was an obvious choice.

The premise was that we wanted to make the extension of a component flexible/scriptable, and Ruby seemed like a good fit.

Here are some of the highlights of what was done…

The requirement was to publish data from one source to a webservice.

The initial cut was a pipeline of classes that transformed the source data into the target format and then called the webservice. The problem with this solution was that the webservice was very slow – only able to handle around 16 calls per second, compared to the 100 messages per second coming from the source.
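The pipeline shape can be sketched in plain Ruby like this – the step and message contents are made up for illustration; the real steps transformed our source records and the final stage called the webservice:

```ruby
# Illustrative pipeline: each step is a callable the message flows through;
# the final callable stands in for the webservice client.
steps = [
  ->(msg) { msg.merge(format: 'target') },  # transform to target format
  ->(msg) { msg.merge(validated: true) },   # stand-in for validation
]
publish = ->(msg) { msg }  # stand-in for the actual webservice call

result    = steps.reduce({ id: 1 }) { |m, step| step.call(m) }
published = publish.call(result)
```

Being a simple chain, each message ran through every step on a single thread – which is exactly why the slow webservice at the end became the bottleneck.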

This was addressed by adding some threading, calling out to Java’s concurrent utilities.

A threadpool was created (num_threads specifies number of threads to run):

@exec = java.util.concurrent.Executors.new_fixed_thread_pool $props["im.num_threads"].to_i

And then, for each message that came in, its handling was passed to the threadpool:

@exec.execute do
  # ... do work; this block runs on one of the pool's threads
end

This meant we could hammer the webservice with a lot more calls; the downside is that we need to be wary of concurrency issues.

Some of the objects in the pipeline were completely standalone (no shared state) and so were wrapped in Java ThreadLocal objects – here via a small in-house helper, where the block supplies each thread’s own instance:

@time_formatter = $jruby_obj.threadLocal { }

Where the process was not standalone but had shared data structures, a mutex was used to ensure only one thread accessed the structure at a time.

# create the mutex object - once for the shared object
@mutex = Mutex.new

# then when we need to control access to some data we do this
@mutex.synchronize do
  # ... access the shared structure; only one thread at a time gets here
end

Now, by specifying 20-30 threads, we’re doing 100+ calls per second to the webservice :)
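The whole pattern can be sketched in plain Ruby using the standard-library Queue and Mutex – the real app used java.util.concurrent via JRuby, so this is just an analogue with made-up work:

```ruby
# Plain-Ruby analogue of the pattern: a fixed pool of worker threads drains
# a queue of messages, and a mutex guards the shared results structure.
NUM_THREADS = 4

queue   = Queue.new
results = []
mutex   = Mutex.new

workers = NUM_THREADS.times.map do
  Thread.new do
    while (msg = queue.pop)                         # nil means "stop"
      transformed = msg.upcase                      # stand-in for the pipeline transform
      mutex.synchronize { results << transformed }  # guard the shared array
    end
  end
end

%w[alpha beta gamma].each { |m| queue << m }
NUM_THREADS.times { queue << nil }  # one poison pill per worker
workers.each(&:join)
```

The same caveat applies as in the JRuby version: anything the workers share must be guarded, while per-thread objects can be left alone.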

How to test a local gem (when no internet available)


I frequently use my laptop on the train with little or no internet connectivity.

Thus, when testing locally developed gems, besides running their own test suites, it’s good to test them with an app that uses the gem. But I want to do this without pushing the gem to rubygems/GitHub and then pulling it down again.

These are my notes on how I tried to do it – but I did not get it working…

Firstly, I use RVM and Bundler – so any solution needs to be compatible with that.

From this ascii/rails-cast, it seems there should be a rake task, “rake install”, which installs the gem into the current ruby/gem environment.

So, the plan is:

  • to switch to my app’s ruby/gemset
  • rake install my dev gem
  • test the app.

However, even though the gem list showed the new gem version, it was not being used – Bundler resolves gems from the sources declared in the Gemfile/Gemfile.lock, so a locally installed gem won’t be picked up unless the lockfile allows that version. Perhaps I need to run bundle update…



I have found 2 options since this post:
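One that works entirely offline is Bundler’s path: source – point the app’s Gemfile straight at the local checkout (the gem name and path below are examples, adjust to your layout):

```ruby
# Gemfile of the app under test: use the local checkout directly.
# Name and path are illustrative.
gem 'my_dev_gem', path: '../my_dev_gem'
```

After a bundle install, Bundler loads the gem’s code straight from that directory, so edits to the gem show up on the next app restart – no build/push/pull cycle at all.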

AirPrint to an Ubuntu shared printer via CUPS


I have a printer attached to my Ubuntu server at home (sisko, running 10-ish), which is used for printing from anywhere in the house. It seems that the new AirPrint stuff in iOS 4.2 can work with this, with a little jiggery-pokery.

The links I have found on it so far are:

After following the above…

Sisko seems to have an issue with dbus and so is not appearing on the network – although it works for network printing from Mac OS X.

I also have a netbook running Ubuntu 10-something too – this seems to run dbus fine and appears on the network ok, but printing is not working…

Most importantly the printer shows up on the iPhone :)

Looks like I need to open up the CUPS security a little more…

Found this link which showed how to make OSX find CUPS printers:


cupsctl BrowseRemoteProtocols=cups

This link talks about amending papd config files, but doesn’t seem to help:

[still not there yet, work in progress]

Wait a minute – it’s working! It takes ages to send from the iPad/iPhone, but it’s appearing on the printer – yay!

PS Sisko is mainly used for Time Machine backups –

PPS I found this link with a script for generating the avahi service file; will give it a try.

It seems to work – though just as slowly :(

Ubuntu apt-get upgrades kept back.

I usually use

sudo apt-get update
sudo apt-get upgrade

But sometimes things get “kept back”, like for example kernel upgrades.

You can do these using this command:

sudo aptitude safe-upgrade

And it gives you the option for the other updates.

How to setup Xcode to share code across iPhone projects

The long story is here:

The short story is here.

Global Settings, i.e. do this once:

  1. Set up a shared build output directory that will be shared by all Xcode projects.
  2. Add a “Source Tree” variable that Xcode can use to dynamically find the static library project (one per project to be shared).

Per-Project Settings, i.e. for each project that uses the shared code:

  1. Add the shared project to this project, DON’T select “Copy items into…”
  2. On linked project properties/General tab, select “Relative to COCOS2D_SRC”
  3. On the Target properties/General tab, add “Direct Dependencies” for the shared libs, e.g. cocos2d
  4. On the Target properties/Build tab, ensure there is no hard coded path in the “Library Search Paths” also add a “User Header Search Path” of $(COCOS2D_SRC)
  5. Drag the required static libraries from the shared project into the “Link Binary with Libraries” folder for your target.

And that’s it :)

Git – local uncommitted changes, cannot merge

So you’ve made some changes locally but there are changes on the server that you want – so you want to merge them into your workarea.

Great, git pull… but no, this aborts if there are local uncommitted changes :( .

So, instead, you can throw away the local changes using “git reset --hard”.

Or if you want to keep them, use “git stash”, like this:

git stash – saves your changes away to the stash.
git pull – gets the remote changes.
git stash apply – applies your stashed changes back to your workarea (git stash pop does the same and also drops the stash).

Why couldn’t they just merge, or at least have an option for it? :(

Git – making a local repository remote

You’ve started that funky new project on your PC, done a few bits, checked it into your local Git repository, but now you’ve decided to push it onto your remote repository.

This assumes you are using ssh to connect to the remote machine.

Firstly, on the remote box, make the directory for your project, cd into it and then set it up as a git repository, like so:
git init --bare --shared
Then back in your project, where you’ve previously done a git init, git add . and git commit -m "comments", do this:
git remote add origin [user]@[git host]:[path to project dir on host]
This tells git where the remote origin is for your local project.
Then you need to push your local code into that repository, with this command:
git push origin master

This pushes your local master branch into the remote repo.

There are 2 other things to be wary of:
– how to use your local repo and push/pull to the remote one.
When you first do a git pull back into your local repo, you may well get an error like this
You asked me to pull without telling me which branch you
want to merge with, and 'branch.master.merge' in
your configuration file does not tell me either. Please
specify which branch you want to merge on the command line and
try again (e.g. 'git pull ').
See git-pull(1) for details.

To fix this, you need to tweak the local settings … but perhaps it’s easiest just to clone it again and things will be set up as needed.

If you often merge with the same branch, you may want to
configure the following variables in your configuration

branch.master.remote =
branch.master.merge =
remote..url =
remote..fetch =

This does it quite well:

git config branch.master.remote origin && git config branch.master.merge refs/heads/master

– how to clone it, so you can push/pull too. TBD


This link also covers things quite well.