my recent reads..

Atomic Accidents: A History of Nuclear Meltdowns and Disasters; From the Ozark Mountains to Fukushima
Power Sources and Supplies: World Class Designs
Red Storm Rising
Locked On
Analog Circuits Cookbook
The Teeth Of The Tiger
Sharpe's Gold
Without Remorse
Practical Oscillator Handbook
Red Rabbit

Monday, February 08, 2016

LittleArduinoProjects#174 USB LED Notifiers

So four of these USB Webmail Notifier devices turned up in a dusty cupboard
in the office.

A quick tear-down shows they contain a super-simple circuit - just a
SONiX Technology SN8P2203SB 8-bit microcontroller that handles the USB protocol and drives an RGB LED. The SN8P2203SB is an old chip, phased out on 2010/04/30 and superseded by the SN8P2240. Its USB implementation is supremely primitive - basically presenting itself as a very basic USB 1.0 HID device.

A quick google reveals quite a bit of old code lying around for various projects using devices like this. Most use libusb for convenience - and often the legacy libusb 0.1 at that. As I'm mainly on MacOSX, that code is not much use to me: Apple no longer allows user-space claiming of HID devices,
and the libusb team decided not to try to work around that.

So to bring things up-to-date, I wrote a simple demo using hidapi
and things all work fine - see the video below.

Now I just need to ponder on good ideas for what to do with these things!

As always, all notes and code are on GitHub.

Sunday, February 07, 2016

LittleArduinoProjects#173 Mini 64-LED Cube

LED cubes were a "thing" a few years back maybe ... but I've never built one. Time to fix that...

Here's my "mini" 4x4x4 cube - 3cm per side with 3mm clear blue LEDs. Pretty compact, and delivers nice effects. The clear blue LEDs work really well - very bright, even when driven with minimal current.

It's encased in a Ferrero Rocher cube box. That raised some challenges during the build - most of the effort went into squeezing all the electronics into the space in the lid (which becomes the base of the cube). Not quite as small as HariFun's "World's Tiniest RGB LED Cube", but about as small as you can get without resorting to SMD LEDs!

All notes, schematics and code are on GitHub as usual. Here's a quick demo:

Sunday, June 28, 2015

LittleArduinoProjects#100 Retrogaming on an Arduino/OLED "console"

(blogarhythm ~ invaders must die - The Prodigy)
Tiny 128x64 monochrome OLED screens are cheap and easy to come by, and quite popular for adding visual display to a microcontroller project.

My first experiments in driving them with raw SPI commands had me feeling distinctly old school, as the last time I remember programming a bitmap screen display was probably about 30 years ago!

So while in a retro mood, what better than to attempt an arcade classic? At first I wasn't sure it was going to be possible to make a playable game, given the limited Arduino memory and the relatively slow screen communication protocol.

But after a few tweaks of the low-level SPI implementation, I surprised myself with how well it can run. There were even enough clock cycles left to throw in a soundtrack and sound effects.

Here's a quick video on YouTube of the latest version. ArdWinVaders! .. in full lo-rez monochrome glory, packed into 14kb and speeding along at 8MHz.



Full source and schematics are in the LittleArduinoProjects collection on Github.

Sunday, February 01, 2015

LittleArduinoProjects#018 The Fretboard - a multi-project build status monitor

(blogarhythm ~ Diablo - Don't Fret)

The Fretboard is a pretty simple Arduino project that visualizes the build status of up to 24 projects with an addressable LED array. The latest incarnation of the project is housed in an old classical guitar … hence the name ;-)

All the code and design details for The Fretboard are open-source and available at fretboard.tardate.com. Feel free to fork or borrow any ideas for your own build. If you build anything similar, I'd love to hear about it.

Wednesday, January 14, 2015

cancannible role-based access control gets an update for Rails 4

(blogarhythm ~ Can You Keep a Secret? / 宇多田ヒカル)

cancannible is a gem that has been kicking around in a few large-scale production deployments for years. It still gets loving attention - most recently an official update for Rails 4 (thanks to the push from @zwippie).

And now there are also some demo sites - one for Rails 3.2.x and another for Rails 4.x - so that anyone can see it in action.


So what exactly does cancannible do? In a nutshell, it is a gem that extends CanCan with a range of capabilities (see the sketch after this list):

  • permissions inheritance (so that, for example, a User can inherit permissions from Roles and/or Groups)
  • general-purpose access refinements (to automatically enforce multi-tenant or other security restrictions)
  • automatic storage and loading of permissions in a database
  • optional caching of abilities (so that they don't need to be recalculated on each web request)
  • export of CanCan methods to the model layer (so that permissions can be applied in model methods, and easily set up in test cases)
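
As a rough flavour of what that means in practice, here's a minimal sketch. The model names and setup details here are my assumptions for illustration, not lifted from the gem's README - check the docs for the exact API:

# Hypothetical grantee model - cancannible is mixed into the models that
# hold permissions (exact setup macros may differ; see the gem docs).
class User < ActiveRecord::Base
  has_many :roles   # permissions can be inherited from associated Roles/Groups
end

# With CanCan methods exported to the model layer, permission checks can be
# made without any controller context - handy in model methods and tests:
user = User.find_by_email('someone@example.com')
user.can?(:read, Widget)      # => true/false, using stored + inherited permissions
user.cannot?(:manage, Widget)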

Friday, January 09, 2015

555 Timer simulator with HTML5 Canvas

The 555 timer chip has been around since the '70s, so does the world really need another website for calculating the circuit values?

No! But I made one anyway. It's really an excuse to play around with HTML5 canvas and demonstrate a grunt & coffeescript toolchain.

See this running live at visual555.tardate.com, where you can find more info and links to projects and source code on GitHub.
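
For the record, the arithmetic behind such calculators is nothing exotic - just the classic 555 astable-mode approximations. Here's a quick Ruby sketch (component values are arbitrary examples):

# Classic 555 astable-mode formulas:
#   t_high = 0.693 * (R1 + R2) * C
#   t_low  = 0.693 * R2 * C
#   f      = 1.44 / ((R1 + 2 * R2) * C)   (equivalent to 1 / (t_high + t_low))
r1 = 10_000.0    # ohms
r2 = 100_000.0   # ohms
c  = 10.0e-6     # farads
t_high = 0.693 * (r1 + r2) * c
t_low  = 0.693 * r2 * c
frequency  = 1.0 / (t_high + t_low)
duty_cycle = t_high / (t_high + t_low)
puts format('f = %.2f Hz, duty cycle = %.1f%%', frequency, duty_cycle * 100)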


(blogarhythm ~ Time's Up / Living Colour)

Tuesday, January 28, 2014

Learning Devise for Rails

(blogarhythm ~ Points of Authority / Linkin Park)

I recently got my hands on a review copy of Learning Devise for Rails from Packt and was quite interested to see if it was worth a recommendation (tldr: yes).

A book like this has to be current. Happily this edition covers Rails 4 and Devise 3, and code examples worked fine for me with the latest point releases.

The book is structured as a primer and tutorial, perfect for those who are new to devise, and requires only basic familiarity with Rails. Tutorials are best when they go beyond the standard trivial examples, and the book does well on this score. It covers a range of topics that quickly become relevant when actually trying to use devise in real life. Beyond the basic steps needed to add devise to a Rails project (sketched after this list), it demonstrates:
  • customizing devise views
  • using external authentication providers with Omniauth
  • using NoSQL storage (MongoDB) instead of ActiveRecord (SQLite)
  • integrating access control with CanCan
  • how to test with Test::Unit and RSpec
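
For context, the "basic steps" boil down to something like this - a generic sketch of the standard devise setup for Rails 4 / Devise 3, not code reproduced from the book:

# Gemfile
gem 'devise'

# then from the shell:
#   bundle install
#   rails generate devise:install
#   rails generate devise User
#   rake db:migrate

# config/routes.rb
devise_for :users

# app/models/user.rb (as generated; trim the modules you don't need)
class User < ActiveRecord::Base
  devise :database_authenticatable, :registerable,
         :recoverable, :rememberable, :trackable, :validatable
end

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  before_action :authenticate_user!   # devise also gives you current_user, user_signed_in?
end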

I particularly appreciate the fact that the chapter on testing is even there in the first place! These days, "how do I test it?" should really be one of the first questions we ask when learning something new.

The topics are clearly demarcated so after the first run-through the book can also be used quite well as a cookbook. It does however suffer from a few cryptic back-references in the narrative, so to dive in cookbook-style you may find yourself having to flip back to previous sections to connect the dots. A little extra effort on the editing front could have improved this (along with some of the phraseology, which is a bit stilted in parts).

Authentication has always been a critical part of Rails development, but since Rails 3 in particular it is fair to say that devise has emerged as the mature, conventional solution (for now!). So I can see this book being the ideal resource for developers just starting to get serious about building Rails applications.

Learning Devise for Rails would be a good choice if you are looking for that shot of knowledge to fast-track the most common authentication requirements, but also want to learn devise in a little more depth than a copy/paste from the README and wiki will get you! It gives enough foundation to move on to more advanced topics not covered in the book (such as developing custom strategies, or understanding devise and warden internals).

Tuesday, November 12, 2013

Punching firewalls with Mandrill webhooks

(blogarhythm ~ Fire Cracker - Ellegarden)

Mandrill is the transactional email service by the same folks who do MailChimp. I've written about it before, in particular how to use the mandrill-rails gem to simplify inbound webhook processing.

Mandrill webhooks are a neat, easy way for your application to respond to various events, from recording when users open email, to handling inbound mail delivery.

That all works fine if your web application lives on the public internet, i.e. Mandrill can reach it to post the webhook. But that's not always possible: consider your development/test/staging environments, or production servers that IT have told you must be "locked down to the max".

Mandrill currently doesn't offer an official IP whitelist, so it's not possible to use firewall rules to just let Mandrill servers in. Mandrill does provide webhook authentication (supported by the mandrill-rails gem), but that solves a different problem: it lets you distinguish legitimate webhook requests from anything else that reaches your server; it doesn't stop arbitrary traffic from reaching the server in the first place.

I thought I'd share a couple of techniques I've used to get Mandrill happily posting webhooks to my dev machine and servers behind firewalls.

Using HAProxy to reverse-proxy Mandrill Webhooks

If you have at least one internet-visible address, HAProxy is excellent for setting up a reverse-proxy to forward inbound Mandrill Webhooks to the correct machine inside the firewall. I'm currently using this for some staging servers so we can run real inbound mail scenarios.

Here's a simple scenario:
  • gateway.mydomain.net - your publicly-visible server, with HAProxy installed and running OK
  • internal/192.168.0.1 - a machine on an internal network that you want to receive webhooks posted to 192.168.0.1:8080/inbox

Say the gateway machine already hosts http://gateway.mydomain.net, but we want to be able to tell Mandrill to post its webhooks to http://gateway.mydomain.net/inbox_internal, and have these (and only these) requests forwarded to http://192.168.0.1:8080/inbox.

Here are the important parts of the /etc/haproxy/haproxy.cfg used on the gateway machine:
global
  #...
 
defaults
  mode http                                # enable http mode which gives us layer 7 filtering
  #...

# this is HAProxy listening on http://gateway.mydomain.net
frontend app *:80                          
  default_backend webapp                   # set the default server for all requests
  # next we define a rule that will send requests to the internal_mandrill backend instead if the path starts with /inbox_internal
  acl req_mandrill_inbox_path path_beg /inbox_internal 
  use_backend internal_mandrill if req_mandrill_inbox_path 

# define a group of backend servers for the main web app
backend webapp                             
  server app1 127.0.0.1:8001             
 
# this is where we will send the Mandrill webhooks
backend internal_mandrill                  
  reqirep     ^([^\ ]*)\ /inbox_internal(.*) \1\ /inbox\2 
  server int1 192.168.0.1:8080  # add a server to this backend

Obviously the path mapping is optional (but neat to demonstrate), and I've left out all the normal HAProxy options like balancing, checks and SSL option forwarding that you might require in practice but that aren't relevant to the issue at hand.

Job done! Our internal server remains hidden behind the firewall, but Mandrill can get through to it by posting webhooks to http://gateway.mydomain.net/inbox_internal.

Tunneling to dev machines with ngrok

For development, we usually don't want anything so permanent. There are quite a few services for tunneling to localhost, mainly with developers in mind. Lately I've been using ngrok, which is living up to its name - it rocks! Trivial to set up and it works like a dream. Say I'm developing a Rails app:
# run app locally (port 3000)
rails s
# run ngrok tunnel to port 3000
ngrok 3000

Once started, ngrok will give you http and https addresses that will tunnel to port 3000 on your machine. You can use these addresses in the Mandrill webhook and inbound domains configuration, and they'll work as long as you keep your app and ngrok running.

Sunday, June 23, 2013

Design thinking is not rocket science

OH on the ABC Radio National By Design podcast (00:56): In the field: Paul Bennet
For us the idea of small ideas that people can actually connect together and actually implement are very big ideas.

And I'm sure you've heard people describe design thinking as sort of a combination of rocket science, string theory and calculus.

It isn't. It's not rocket science at all. It's actually very very straight-forward.

It's looking in the world, being inspired by people, co-creating with them, prototyping and then iterating. And it has to be impactful. It has to work.

Writing simple ruby utilities for Google IMAP + OAuth 2.0


(blogarhythm ~ Unpretty/Fanmail: TLC)

There are some good ruby gems available for dealing with OAuth 2.0 and talking to Google APIs, for example:
  • google-api-client is the official Google API Ruby Client; it makes it trivial to discover and access supported APIs.
  • oauth2-client provides generic OAuth 2.0 support that works not just with Google
  • gmail_xoauth implements XAUTH2 for use with Ruby Net::IMAP and Net::SMTP
  • gmail provides a rich Ruby-esque interface to GMail but you need to pair it with gmail_xoauth for OAuth 2 support (also seems that it's in need of a new release to merge in various updates and extensions people have been working on)

For the task I had at hand, I just wanted something simple: connect to a mailbox, look for certain messages, download and do something with the attachments and exit. It was going to be a simple utility to put on a cron job.

No big deal. The first version simply used gmail_xoauth to enable OAuth 2.0 support for IMAP, and I added some supporting routines to handle access_token refreshing.

It worked fine as a quick and dirty solution, but had a few code smells. Firstly, too much plumbing code. But most heinously - you might have seen this yourself if you've done any client utilities with OAuth - it used the widely-recommended oauth2.py Python script to orchestrate the initial authorization. For a ruby tool!

Enter the GmailCli gem

So I refactored the plumbing into a new gem called gmail_cli. It is intended for one thing: a super-simple way to whip up utilities that talk to Google IMAP, with all the OAuth 2.0 support you need. It actually uses google-api-client and gmail_xoauth under the covers for the heavy lifting, but wraps them up in a neat package with the simplest interface possible. Feel free to go use and fork it!

With gmail_cli in your project, there are just 3 things to do:

  1. If you haven't already, create your API project credentials in the Google APIs console (on the "API Access" tab)
  2. Use the built-in rake task or command-line to do the initial authorization. You would normally need to do this only once for each deployment:
    $ rake gmail_cli:authorize client_id='id' client_secret='secret'
    $ gmail_cli authorize --client_id 'id' --client_secret 'secret'
  3. Use the access and refresh tokens generated in step 2 to get an IMAP connection in your code (see the usage sketch after this list). This interface takes care of refreshing the access token for you as required each time you use it:
    # how you store or set the credentials Hash is up to you, but it should have the following keys:
    credentials = {
      client_id:     'xxxx',
      client_secret: 'yyyy',
      access_token:  'aaaa',
      refresh_token: 'rrrr',
      username:      'name@gmail.com'
    }
    imap = GmailCli.imap_connection(credentials)
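
From there it's standard Net::IMAP territory. This sketch assumes the connection behaves like a plain Net::IMAP instance; the mailbox, search criteria and use of the mail gem are just illustrative:

require 'mail'   # the mail gem, for convenient attachment handling

imap.select('INBOX')
imap.search(['UNSEEN', 'SUBJECT', 'daily report']).each do |message_id|
  raw  = imap.fetch(message_id, 'RFC822').first.attr['RFC822']
  mail = Mail.read_from_string(raw)
  mail.attachments.each do |attachment|
    File.open(attachment.filename, 'wb') { |f| f.write(attachment.decoded) }
  end
end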

A Better Way?

Polling a mailbox is a terrible thing to have to do, but sometimes network restrictions or the architecture of your solution makes it the best viable option. Much better is to be reactive to mail that gets pushed to you as it is delivered.

I've written before about Mandrill, which is the transactional email service from the same folks who do MailChimp. I kinda love it ;-) It is perfect if you want to get inbound mail pushed to your application instead of polling for it. And if you run Rails, I really would encourage you to check out the mandrill-rails gem - it adds Mandrill inbound mail processing to my Rails apps with just a couple of lines of code.
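
For the record, those "couple of lines" look roughly like this - a sketch from memory, so the module and hook names may not exactly match the current mandrill-rails README:

# config/routes.rb
resource :inbox, controller: 'inbox', only: [:show, :create]

# app/controllers/inbox_controller.rb
class InboxController < ApplicationController
  include Mandrill::Rails::WebHookProcessor

  # called once for each inbound message event Mandrill posts to /inbox
  def handle_inbound(event_payload)
    # event_payload wraps the decoded JSON event - process the message here
  end
end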

Tuesday, June 18, 2013

Ruby Tuesday

(blogarhythm ~ Ruby - Kaiser Chiefs)
@a_matsuda convinced us to dive into Ruby 2.0 at RedDotRubyConf, so I guess this must be the perfect day of the week for it!

Ruby 2.0.0 is currently at p195, and we heard at the conference how stable and compatible it is.

One change that may catch us out if we do much multilingual work that isn't already unicode: Ruby 2.0 now assumes UTF-8 encoding for source files. The special "encoding: utf-8" magic comment becomes redundant, but if we don't include it, behaviour in 2.0.0 can differ from earlier versions:
$ cat encoding_binary.rb 
s = "\xE3\x81\x82"
p str: s, size: s.size
$ ruby -v encoding_binary.rb 
ruby 2.0.0p195 (2013-05-14 revision 40734) [x86_64-darwin11.4.2]
{:str=>"あ", :size=>1}
$ ruby -v encoding_binary.rb 
ruby 1.9.3p429 (2013-05-15 revision 40747) [x86_64-darwin11.4.2]
{:str=>"\xE3\x81\x82", :size=>3}
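
And if you want the old byte-oriented behaviour regardless of the Ruby version, an explicit magic comment still pins it down:

# encoding: binary
s = "\xE3\x81\x82"
p str: s, size: s.size
# => {:str=>"\xE3\x81\x82", :size=>3} on both 1.9.3 and 2.0.0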

Quickstart on MacOSX with RVM

I use rvm to help manage various Ruby installs on my Mac, and trying out new releases is exactly the time you want its assistance to prevent screwing up your machine. There were only two main things I needed to take care of to get Ruby 2 installed and running smoothly:
  1. Update rvm so it knows about the latest Ruby releases
  2. Update my OpenSSL installation (it seems 1.0.1e is required although I haven't found that specifically documented anywhere)
Here's a rundown of the procedure I used in case it helps (note, I am running MacOSX 10.7.5 with Xcode 4.6.2). First I updated rvm and attempted to install 2.0.0:
$ rvm get stable
# => updated ok
$ rvm install ruby-2.0.0
Searching for binary rubies, this might take some time.
No binary rubies available for: osx/10.7/x86_64/ruby-2.0.0-p195.
Continuing with compilation. Please read 'rvm mount' to get more information on binary rubies.
Installing requirements for osx, might require sudo password.
-bash: /usr/local/Cellar/openssl/1.0.1e/bin/openssl: No such file or directory
Updating certificates in ''.
mkdir: : No such file or directory
Password:
mkdir: : No such file or directory
Can not create directory '' for certificates.
Not good!!! What's all that about? Turns out to be just a very clumsy way of telling me I don't have OpenSSL 1.0.1e installed.

I already have OpenSSL 1.0.1c installed using brew (so it doesn't mess with the MacOSX system-installed OpenSSL), so updating is simply:
$ brew upgrade openssl
==> Summary
 /usr/local/Cellar/openssl/1.0.1e: 429 files, 15M, built in 5.0 minutes
So then I can try the Ruby 2 install again, starting with the "rvm requirements" command to first make sure all pre-requisites are installed:
$ rvm requirements
Installing requirements for osx, might require sudo password.
[...]
Tapped 41 formula
Installing required packages: apple-gcc42.................
Updating certificates in '/usr/local/etc/openssl/cert.pem'.
$ rvm install ruby-2.0.0
Searching for binary rubies, this might take some time.
No binary rubies available for: osx/10.7/x86_64/ruby-2.0.0-p195.
Continuing with compilation. Please read 'rvm mount' to get more information on binary rubies.
Installing requirements for osx, might require sudo password.
Certificates in '/usr/local/etc/openssl/cert.pem' already are up to date.
Installing Ruby from source to: /Users/paulgallagher/.rvm/rubies/ruby-2.0.0-p195, this may take a while depending on your cpu(s)
[...]
$ 
OK, this time it installed cleanly as I can quickly verify:
$ rvm use ruby-2.0.0
$ ruby -v
ruby 2.0.0p195 (2013-05-14 revision 40734) [x86_64-darwin11.4.2]
$ irb -r openssl
2.0.0p195 :001 > OpenSSL::VERSION
 => "1.1.0"
2.0.0p195 :002 > OpenSSL::OPENSSL_VERSION
 => "OpenSSL 1.0.1e 11 Feb 2013"

Saturday, June 15, 2013

Optimising presence in Rails with PostgreSQL

(blogarhythm ~ Can't Happen Here - Rainbow)
It is a pretty common pattern to branch depending on whether a query returns any data - for example to render a quite different view. In Rails we might do something like this:
query = User.where(deleted_at: nil).and_maybe_some_other_scopes
if results = query.presence
  results.each {|row| ... }
else
  # do something else
end
When this code executes, we issue at least 2 database requests: one to check presence, and another to retrieve the data. Running this at the Rails console, we can see the queries logged as they execute, for example:
(0.9ms)  SELECT COUNT(*) FROM "users" WHERE "users"."deleted_at" IS NULL
 User Load (15.2ms)  SELECT "users".* FROM "users" WHERE "users"."deleted_at" IS NULL
This is not surprising since, under the covers, presence (or present?) ends up calling count, which must hit the database (unless you have already accessed/loaded the result set). And 0.9ms doesn't seem too high a price to pay to determine if you should even try to load the data, does it?

But when we are running on PostgreSQL in particular, we've learned to be leery of COUNT(*) due to its well-known performance problems. In fact, I first started digging into this question when I started seeing expensive COUNT(*) queries show up in NewRelic slow transaction traces. How expensive COUNT(*) actually is depends on many factors, including the complexity of the query, the availability of indexes, the size of the table, and the size of the result set.

So can we improve things by avoiding the COUNT(*) query? Assuming we are going to use all the results anyway, and we haven't injected any calculated columns in the query, we could simply to_a the query before testing presence i.e.:
query = User.where(deleted_at: nil).and_maybe_some_other_scopes
if results = query.to_a.presence
  results.each {|row| ... }
else
  # do something else
end

I ran some benchmarks comparing the two approaches with different kinds of queries on a pretty well-tuned system; an illustrative version of the benchmark harness is sketched after the table. Here are some of the results:
Query                                                     | Using present? | Using to_a | Faster by
10k indexed queries returning 1 / 1716 rows               | 17.511s        | 10.938s    | 38%
4k complex un-indexed queries returning 12 / 1716 rows    | 23.603s        | 15.221s    | 36%
4k indexed queries returning 1 / 1763218 rows             | 22.943s        | 20.924s    | 9%
10 complex un-indexed queries returning 15 / 1763218 rows | 23.196s        | 14.072s    | 40%
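
A benchmark along these lines is enough to reproduce the comparison - this is an illustrative reconstruction, not the exact harness used for the numbers above:

require 'benchmark'

n = 10_000
Benchmark.bm(10) do |x|
  x.report('present?') do
    n.times do
      query = User.where(deleted_at: nil)
      query.each { |row| row } if query.present?   # COUNT(*) first, then SELECT
    end
  end
  x.report('to_a') do
    n.times do
      results = User.where(deleted_at: nil).to_a
      results.each { |row| row } if results.present?   # single SELECT
    end
  end
end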

Clearly, depending on the type of query we can gain up to 40% performance improvement by restructuring our code a little. While my aggregate results were fairly consistent over many runs, the performance of individual queries did vary quite widely.

I should note that the numbers were *not* consistent or proportional across development, staging, test and production environments (mainly due to differences in data volumes, latent activity and hardware) - so you can't benchmark on development and assume the same applies in production.

Things get murky with ActiveRecord add-ons

So far we've talked about the standard ActiveRecord situation. But there are various gems we might also be using to add features like pagination and search magic. MetaSearch is an example: a pretty awesome gem for building complex and flexible search features. But (at least with version 1.1.3) present? has a little surprise in store for you:
irb> User.where(id: '0').class
=> ActiveRecord::Relation
irb> User.where(id: 0).present?
   (0.8ms)  SELECT COUNT(*) FROM "users" WHERE "users"."id" = 0
=> false
irb> User.search(id_eq: 0).class
=> MetaSearch::Searches::User
irb> User.search(id_eq: 0).present?
=> true

Any Guidelines?

So, always to_a my query results? Well, no, it's not that simple. Here are some things to consider:
  • First, don't assume that <my_scoped_query>.present? means what you think it might mean - test or play it safe
  • If you are going to need all result rows anyway, consider calling to_a or similar before testing presence
  • Avoid this kind of optimisation except at the point of use. One of the beauties of ActiveRecord::Relation is the chainability - something we'll kill as soon as we hydrate to a result set Array for example.
  • While I got a nice 40% performance bonus in some cases with a minor code fiddle, mileage varies and much depends on the actual query. You probably want to benchmark in the actual environment that matters and not make any assumptions.

Sunday, June 09, 2013

My Virtual Swag from #rdrc

(blogarhythm ~ Everybody's Everything - Santana)

So the best swag you can get from a technology conference is code, right? Well RedDotRubyConf 2013 did not disappoint! Thanks to some fantastic speakers, my weekends for months to come are spoken for. Here's just some of the goodness:

Will I still be a Rubyist in 5 years? #rdrc

(blogarhythm ~ Ruby - Kaiser Chiefs)
The third RedDotRubyConf is over, and I think it just keeps getting better! Met lots of great people, and saw so many of my Ruby heroes speak on stage. Only thing that could make it even better next year would be to get the video recording thing happening!

I had the humbling opportunity to share the stage, and here are my slides. It turned out to be a reflection on whether I'd still be a Rubyist in another 5 years, and the external trends that might change that. Short story: Yes! Of course. I'll always think like a Rubyist, even though things will probably get more polyglot. The arena of web development is perhaps the most unpredictable though.

A couple of areas I highlight that really need a bit more love include:
  • There's a push on SciRuby. Analytics are no longer the esoteric domain of bioinformaticists. Coupled with Big Data (which Ruby is pretty good at), analytics are driving much of the significant innovation in things we build.
  • Krypt - an effort led by Martin Boßlet to improve the cryptographic support in Ruby. My experience building megar made it painfully obvious why we need to fix this.

Let it never be said, the romance is dead
'Cos there’s so little else occupying my head

I mentioned a few of my projects in passing. Here are the links for convenience:
  • RGovData is a ruby library for really simple access to government data. It aims to make consuming government data sets a "one liner", letting you focus on what you are trying to achieve with the data, and happily ignore all the messy underlying details of transport protocols, authentication and so on.
  • sps_bill_scanner is a ruby gem for converting SP Services PDF invoices into data that can be analysed with R. Only useful if you are an SP Services subscriber in Singapore, but otherwise perhaps an interesting example of extracting positional text from PDF and doing some R.
  • megar ("megaargh!" in pirate-speak) is a Ruby wrapper and command-line (CLI) client for the mega.co.nz API. My example of how you *can* do funky crypto in Ruby ... it's just much harder than it should be!

Sunday, March 17, 2013

Amplifying Human Emotion

(blogarhythm ~ Sweet Emotion 相川七瀬)

It all comes back to connectivity. Om Malik (TWiST #327 @00:37:30) has a brilliant characterization of the true impact of the internet:
human emotion amplified at network scale

Sunday, March 10, 2013

Rolling the Mega API with Ruby

(blogarhythm ~ Can you keep a secret? - 宇多田ヒカル)

Megar (“megaargh!” in pirate-speak) is a Ruby wrapper and command-line client for the Mega API.

In the current release (gem version 0.0.3), it has coverage of the basic file/folder operations: connect, get file/folder listings and details, upload and download files. You can use it directly in Ruby with what I hope you'll find is a very sane API, but it also sports a basic command-line mode for simple listing, upload and download tasks.

If you are interested in hacking around with Mega, and prefer to do it in Ruby, give it a go! Like this:
# do a complete folder/file listing
session = Megar::Session.new(email: 'my@email.com', password: 'my_password')
session.folders.each do |folder|
  folder.files.each do |file|
    puts file.name
  end
end
# upload a file
file_handle = '../my_files/was_called_this.mp3'
session.files.create( name: 'First.mp3', body: file_handle )
Or from the command line:
$ megar -e my@email.com -p my_password ls
$ megar -e my@email.com -p my_password put *.pdf
I would still call it "experimental" at this stage because it needs more widespread hammering, and of course the Mega API is not fully documented yet. There are many more features of the API that it would be good to support, and I'd love for others to pitch in and help - go fork it on github!

I was keen to get a Mega account and check it out when the launch publicity hit, and was immediately impressed by the web interface. Very slick. Notwithstanding some of the intense analysis and some criticism (for example by SpiderOak and Security Now), the "trust no-one" design approach is very interesting to contemplate and hack around with.

The Mega API is still evolving. The documentation is thin and the main resource we have to work with is the Javascript reference implementation that actually runs the Mega site. But there has been quite a bit of work in the community to hack on the API - particularly in Python (with API analysis and projects like mega.py).

It didn't take me long to realise there was nothing much going on with Ruby. After a bit of messing around, I think the main reason for that is the pretty wretched state of cryptographic support in Ruby. Unlike Python (which has PyCrypto amongst others I'm sure), in Ruby we still on the whole get by with thin wrappers on OpenSSL that look and smell distinctly C-dy. But that's a challenge for another day...

For now I'm pretty happy that Megar has all the main crypto challenges solved (after a bit of low-level reverse engineering supported by a healthy dose of TDD). Now I wonder what I'm going to use it for?