

HUMBUGers


July 26, 2014

Adrian Sutton: Don’t Make Your Design Responsive

Every web-based product is adding “responsive design” to its feature list. Unfortunately, in many cases that responsive design actually makes the product much harder to use across a variety of screen sizes rather than easier.

The problem is that common CSS libraries and grid systems, including the extremely popular bootstrap, imply that a design can be made responsive using fixed cut-off points, with the design just automatically adjusting. In reality, making a design responsive requires tailoring the cut-off points so that the design adjusts at the points where it stops working well.

For example, let’s look at the bootstrap documentation and in particular the top menu it uses. The bootstrap documentation gets it right: we can shrink the window down right to the point where the menus only just fit, and they stick with the full-size design:

Menu remains full-size for as long as it can fit.

If we shrink the window further the menus wouldn’t fit anymore so they correctly switch to the hamburger style:

Menu collapses once it would no longer fit.

That’s the right way to do it: the cut-off points have been specifically tailored to the content. There are other stylistic changes as well as these structural ones – the designer believes that centred text works better for headings on smaller screens, for example. That’s fine; they’re fairly arbitrary design decisions based on what the designer believes looks best. I’m focussed on the structural issues.

To see what happens if we ignore this, let’s pretend that we add a new menu item:

Our “New Item” shown correctly when the window is wide enough.

But now when we shrink the window down, the breakpoint is in the wrong place:

The “New Item” menu no longer fits and causes incorrect wrapping because the break point is wrong.

Now the design breaks as we shrink the window because the break point hasn’t been specifically tailored to the content. This is the type of error that regularly happens when people assume a responsive grid system can automatically make their site responsive. The repercussions in this case aren’t particularly bad, but they can be significantly worse.
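
A content-tailored break point is nothing more exotic than a media query whose width comes from measuring where the content actually stops fitting, rather than from a library default. A minimal sketch (the 860px figure and the class names are purely illustrative, not bootstrap’s):

@media (max-width: 860px) {
    /* 860px is where this particular menu stops fitting on one line – measured, not taken from a grid system */
    .site-nav        { display: none; }   /* hide the full-size menu */
    .site-nav-toggle { display: block; }  /* show the hamburger toggle instead */
}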

Recently Jenkins released a rewrite of their UI, moving it to bootstrap, which has unfortunately gotten responsive design completely wrong (and sadly I’m yet to see anything it’s actually improved). Browser widths that used to work perfectly well with the desktop-only site are now treated as mobile browsers and the content wraps into a long column. What’s worse, the most useless content, the sidebar, is shown at the top, with the main content pushed way down the page. At other widths the design doesn’t fit but doesn’t wrap either, leaving some links completely inaccessible.

It would be much better if people stopped jumping on the responsive design bandwagon and just designed their site to work for desktop browsers, unless they are prepared to fully invest and do responsive design right. Mobile browsers are designed to work well with sites built for desktop and have lots of tools and techniques for coping with them. As you add responsive design and other adjustments aimed at mobile, you shift the responsibility for making the design work well everywhere away from the browser and onto yourself. Using predefined breakpoints from a CSS library is unlikely to give the result you intend. It would be nice if CSS libraries stopped claiming that it will.

July 26, 2014 11:50 AM

July 20, 2014

Adrian Sutton: From Java to Swift

Ever since the public beta of OS X I’ve been meaning to get around to learning Objective-C but for one reason or another never found a real reason to. I’ve picked up bits and pieces of it and even written a couple of working utilities but those were pretty much entirely copy/paste from various sources. Essentially they were small enough and short-lived enough that I only needed the barest grasp of Objective-C syntax and no understanding of the core philosophies and idioms that really make a language what it is. This is probably best exemplified by the approach to memory management those utilities took: it won’t run for long, so just let it leak.

I do however have a ton of experience in Java and JavaScript plus knowledge and experience in a bunch of other languages to a wide range of extents. In other words, I’m not a complete moron, I’m just a complete moron with Objective-C.

Anyway, obviously when Swift was announced it was complete justification for my ignoring Objective-C all these years and interesting enough for me to actually get around to learning it.

So I’ve been building a very small little utility in Swift so I can pull out information from OS X’s system calendar from the command line and push it around to various places that I happen to want it and can’t otherwise get it. The code is up on GitHub if you’re interested – code reviews and patches most welcome. It’s been a great little project to get used to Swift the language without spending too much time trying to learn all the OS X APIs.

Language Features

Swift has some really nice language features that make dealing with common scenarios simple and clear. Unlike many languages it doesn’t seem to go too far with that though – it doesn’t seem likely that people will abuse its features and create overly complex or overly succinct code.

My favourite feature is the built-in optional support. A variable of type String is guaranteed not to be null; a variable of type String? might be. You can’t call any of String’s methods on a String? variable directly – you have to unwrap it first, confirming it isn’t null. That would be painful if it weren’t for the ‘if let’ construct:

let events: String? = ""
if let events = events {
    events.utf16count()
}

I’ve dropped into a habit here which might be a bit overly clever – naming the unwrapped variable the same as the wrapped one. The main reason for this is that I can never think of a better name. I figure it’s much like having an if events != nil check.

APIs

Calling Swift a new language is correct but it would almost be more accurate to call it a new syntax instead. Swift does have its own core API which is unique to it, but that’s very limited. For the most part you’re actually dealing with the OS X (or iOS) APIs which are shared with Objective-C. Thus, people with experience developing in Objective-C quite obviously have a huge head start with Swift.

The other impact of sharing so many APIs with Objective-C is that some of the benefits of Swift get lost – especially around strict type checking and null reference safety. For example retrieving a list of calendar events is done via the EventKit method:

func eventsMatchingPredicate(predicate: NSPredicate!) -> [AnyObject]!

which is helpfully displayed inside Xcode using Swift syntax despite it being a pre-existing Objective-C API and almost certainly still implemented in Objective-C. However if you look at the return type you see the downside of inheriting the Objective-C APIs: the method documentation says it returns [EKEvent] but the actual declaration is [AnyObject]!  So we’ve lost both type safety and null reference safety because Objective-C doesn’t have non-nullable references or generic arrays. It’s not a massive loss because those APIs are well tested and quite stable so we’re extremely unlikely to be surprised by a null reference or unexpected type in the array, but it does require some casting in our code and requires humans to read documentation and check if something can be null rather than having the compiler do it for us.
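
In practice that means casting back to the documented type yourself. A minimal sketch of what that looks like (assuming the Swift beta’s imported EventKit names; the access request and error handling are omitted):

import EventKit

let store = EKEventStore()
let predicate = store.predicateForEventsWithStartDate(NSDate(),
    endDate: NSDate(timeIntervalSinceNow: 86400), calendars: nil)
// eventsMatchingPredicate() returns [AnyObject]!, so cast to recover the documented element type
if let events = store.eventsMatchingPredicate(predicate) as? [EKEvent] {
    for event in events {
        println(event.title)
    }
}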

If Swift were intended to be a language that competes with Java, Python or Ruby, the legacy of the Objective-C APIs would be a real problem. However, Swift is designed specifically to work with those APIs, to be a relatively small but powerful step for OS X and iOS developers. In that context the legacy APIs are really just a small bump in the road that will smooth out over time.

Xcode

The other really big thing a Java developer notices switching to Swift is what a Java developer notices when switching to any other language – the tools suck. In particular, the Java IDEs are superb these days and make writing, navigating and refactoring code so much easier. Xcode doesn’t even come close. The available refactorings are quite primitive even for C and Objective-C and they aren’t supported at all for Swift.

The various project settings and preferences in Xcode are also a complete mystery – and judging from the various questions and explanations on the internet it doesn't seem to be all that much clearer even to people with lots of experience. In reality I doubt it's really much different to Java, which also has a ridiculous amount of complexity in IDE settings. The big difference is that in the Java world you (hopefully) start out by learning the basics using the standard command line tools directly. Doing so gives you a good understanding of the build and runtime setup and makes it much clearer what is controlling how your software is built and what is just setting up IDE preferences. Xcode does provide a full suite of command line developer tools so hopefully I can learn more about them and get that basic understanding.

Finally, Xcode 6 beta 3 is horribly buggy. It’s a beta so I can forgive that but I’m surprised at just how bad it is even for a beta.

CocoaPods

This was a delight to stumble across.  Adding a dependency to a project was a daunting prospect in Xcode (jar files are surprisingly brilliant). I really don’t know what it did but it worked and I’m grateful. Libraries that aren’t available as pods are pretty much dead to me now. There does seem to be a pretty impressive array of libraries available for a wide range of tasks. Currently all of them are Objective-C libraries so you have to be able to understand Objective-C headers and examples and convert them to Swift but it’s not terribly difficult (and trivial for anyone with an Objective-C background).
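
For anyone who hasn’t seen it, the setup is little more than a Podfile next to the Xcode project, followed by running pod install and opening the generated workspace. A sketch of a minimal Podfile (the pod name and versions are purely illustrative):

platform :osx, '10.9'

pod 'AFNetworking', '~> 2.0'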

Overall

Swift has a good feel about it – lots of neat features that keep code succinct. Also it’s very hard not to like strict type checking with good type inference. With Apple pushing Swift as a replacement for Objective-C over time, the libraries and APIs will become more and more “Swift-like”. Xcode should improve to at least offer the basic refactorings it has for other languages and stabilise, which will make it a workable IDE – exceeding the capabilities of what’s available for a lot of languages.


Most importantly, the vast majority of existing Objective-C developers seem to quite like it – plenty of issues raised as well, but overall generally positive.

I think the future for Swift looks bright.

July 20, 2014 11:47 AM

July 18, 2014

Ben Fowler: How to rick-roll your friends with video brochures

My employers recently distributed a promotional video to everybody in the company in the form of video brochures, as part of their big rebranding effort. My puerile sense of humour ensured that I had to have some fun with this cheap but quite neat piece of hardware. "Video brochures" belong to a class of devices known as s1 mp3 video players, which includes most cheap Chinese MP3 audio and video

July 18, 2014 07:13 PM

July 17, 2014

Blue Hackers: Adverse Childhood Experience (ACE) questionnaire | acestoohigh.com

NOTE: the links referred to in this post may contain triggers. Make sure you have appropriate support available.

http://acestoohigh.com/got-your-ace-score/

There are 10 types of childhood trauma measured in the ACE Study, personal as well as ones related to other family members. Once you have your score, there are many useful insights later in the article.

The origin of this study was actually in an obesity clinic.

July 17, 2014 08:45 AM

Blue Hackers: Harvard 75 year longitudinal study released re men and happy lives

July 17, 2014 01:28 AM

July 15, 2014

Daniel Devine: Mildly Useful jQuery Plugins

Over the last few years I have written some mildly useful jQuery plugins. I've had them published on BitBucket but I've never really announced them. Here they are.

Read more…

July 15, 2014 06:36 AM

Ben Martin: Hookup wires can connect themselves...

A test of precision movement of the Lynxmotion AL5D robot arm: seeing if it could pluck a hookup wire from a whiteboard and insert it into an Arduino Uno. The result: yes it certainly can! To go from a Fritzing layout file to an automatic real-world jumper setup, wires would have to be inserted in a specific order so that the gripper could overhang the unwired part of the header as it went along.

Video: Lynxmotion AL5D moving a jumper to an Arduino, from Ben Martin on Vimeo (http://player.vimeo.com/video/100631767).

July 15, 2014 04:22 AM

July 08, 2014

Blue Hackers: 7 Things to Remember When You Think You’re Not Good Enough

July 08, 2014 03:01 AM

July 06, 2014

Ben Martin: Ending up Small

It's easy to get swept up in trying to build a robot that has autonomy, with all the proximity sensing, environment detection and inference, and feedback mechanisms that go along with that. There's something to be said for the fun of a direct-drive robot with just power control and no feedback. So Tiny Tim was born! For reference, his wheels are 4 inches in diameter.


A Uno with an analog joystick shield is used to drive Tim. He is not intended to, and probably never will, be able to move autonomously – direct command only. Packets are sent over a wireless link from the controller (5v) to Tim (3v3). Onboard Tim is an 8MHz/3v3 Pro Micro which I got back on Arduino day :) The motors are driven by a 1A dual TB6612FNG motor driver which is operated very much like an L298 dual H-bridge (two direction pins and a PWM), as sketched below. Tim's Pro Micro also talks to an OLED screen, and his wireless board is placed out behind the battery to try to isolate it a little from the metal. The OLED screen needed 3v3 signals, so Tim became a 3v3-logic robot.
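
For anyone who hasn't driven one of these, a rough sketch of one TB6612FNG channel – two direction pins plus a PWM pin, with a standby pin to enable the driver. The pin numbers are placeholders:

const int AIN1 = 7;   // direction pin 1
const int AIN2 = 8;   // direction pin 2
const int PWMA = 9;   // speed (PWM) pin for channel A
const int STBY = 10;  // standby pin, held high to enable the driver

void setup() {
  pinMode(AIN1, OUTPUT);
  pinMode(AIN2, OUTPUT);
  pinMode(PWMA, OUTPUT);
  pinMode(STBY, OUTPUT);
  digitalWrite(STBY, HIGH);
}

// speed is -255..255: the sign picks the direction, the magnitude sets the PWM duty cycle
void drive(int speed) {
  digitalWrite(AIN1, speed >= 0 ? HIGH : LOW);
  digitalWrite(AIN2, speed >= 0 ? LOW : HIGH);
  analogWrite(PWMA, abs(speed));
}

void loop() {
  drive(128);  // half speed forward
}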

He is missing a hub mount, so the wheel on the left is just sitting on the mini gearmotor's shaft. At the other end of the channel is a Tamiya Omni Wheel which tilts the body slightly forward. I've put the battery at the back to try to make sure he doesn't flip over under hard braking.

A custom PCB would remove most of the wires on Tim and be more robust. But most of Tim is currently put together from random bits that were available. The channel and beams should have a warning letting you know what can happen once you bolt a few of them together ;)

July 06, 2014 04:27 AM

July 05, 2014

Daniel Devine: Encrypted Backup to Amazon S3 (2014)

The following is a guide on how to do encrypted backups to Amazon S3. This is an updated version of my older system, which used the Dt-S3-Backup script, but I've decided to try a replacement which is available from the EPEL repository: duply. This guide has been updated for, and tested on, CentOS 6.

Read more…

July 05, 2014 05:30 AM

July 01, 2014

Ben Martin: The long road to an Autonomous Vehicle.

Some people play tennis, some try to build autonomous wheeled robots. I do the latter, and the result has come to be known as “Terry”. On the long road toward that goal, what started as direct drive – “give the motor a PWM of X% of the total power” – led to having encoder feedback so that the wheel could be turned an exact amount over a given time. This has meant that the control interface no longer uses a direct power and direction slider but has buttons to perform specific motion tasks. The button with the road icon and a 1 will move the wheels forward a single rotation to advance Terry 6π inches forwards (6 inch wheels).

The red-tinted buttons use the wheel encoders to provide precision movement. Enc left and Enc right turn Terry in place (one wheel forward, one wheel backwards). The Circus and Oval perform patrol-like maneuvers. Circus moves forward, turns in place, returns, and turns around again – like a strafing patrol. The Oval, on the other hand, turns more like a regular car, with both wheels going forward but one wheel moving slower than the other to create the turning effect. The pan and tilt control the on-board camera for a good Terry-eye view of the world.

I added two way web socket communications to Terry this week. So now the main battery voltage is updated and the current, err, current for each motor is shown at the bottom of the page. The two white boxes there give version information so you can see if the data is stale or not. I suspect a little async javascript on some model value will be coming so that items on the page will darken as they become stale due to lag on Terry or communications loss.

I've been researching proximity detection and will be integrating a Kinect for longer distance measures soon. Having good reliable distance measurements from Terry to objects is the first step in getting him to create maps of the environment for autonomous navigation.

July 01, 2014 01:22 PM

June 26, 2014

Ben Martin: Atmel ATmega1284

I started tinkering with the ATmega1284. Among other things it gives you an expansive 16KB of SRAM, two UARTs and, of course, looking at the chip, a bunch more IO. A huge plus is that you can get a nice small SMD version as well as this 40-pin DIP monster with the same 1284. Yay for breadboard prototypers who don't oven-bake each board configuration! The angle of the photo seems to include the interesting bits. Just ignore the two opamps on the far right :)


I had trouble getting this to work with a ceramic resonator. The two xtal lines are right next to each other with ground just above, but the 3 pins on the resonator were always a bit hard to get into the right configuration for these lines. Switching over to a real crystal and 22pF caps I got things to work. The symptoms I was having with the resonator included non-reproducibility: sometimes things seemed to upload, sometimes not. Also, make sure the DTR line is going through a cap to the reset pull-up resistor. See the wiring just to the right of the 1284.

I haven't adapted the Arduino makefile to work on this yet, so unfortunately I still have to upload programs using the official IDE. I have the makefile compiling for the 1284 but das blinken doesn't work when I "make upload".

June 26, 2014 12:25 AM

June 20, 2014

Ben Martin: 3d printed part repo for attaching to Actobotics structure

I started a github repo for 3d print source files to attach things to Actobotics structure. First up is a mount to secure a rotary encoder directly to some channel as shown below.


As noted in the readme, I found a few issues post-print, so had to switch to subtractive modelling (dremel time) to complete it. I learnt a few things from this though, so the issues are less likely to bite me in the future. There are some comments in the scad file so hopefully it will be useful to others and maybe one day I'll get a PR to update it. The README file in the repo will probably be the one true source for update info and standing issues with each scad file.

You can just see the bolt just below the rotary encoder. It has a little inset in the 3d part to stop it from free-spinning when you screw on the 6-32 nut from below the channel.

June 20, 2014 12:54 AM

June 16, 2014

Ben Martin: Arduino and wireless networking

The nRF24L01 module allows cheap networking for Arduinos. Other than VCC, which it wants at 3v3, it uses SPI (4 wires) and a few auxiliary wires, one of which is an interrupt line. One of the libraries to drive these chips is RF24. Packets can be up to 32 bytes in length, and by default attempts to send less than 32 bytes result in sending a whole 32-byte block. Attempts to send more than 32 bytes seem to result in only the first 32 being sent. There is a CRC, which is 2 bytes and on by default.

For doing some IoT things, using an HMAC might be a much better choice than just a CRC. The major advantage is that one can tell that something coming over a wireless link was from the expected source. Of course, if somebody has physical access to the Arduinos then they can clone the HMAC key, but one has to define what they want from the system. The HMAC provides a fairly good assurance that sensor data is coming from the Arduino you think it is coming from rather than from somebody trying to send fake data. Another rather cute benefit is that the system doesn't accept junk data attacks: if the HMAC doesn't match, the packet doesn't get evaluated by the higher level software.

Using a SHA256 HMAC, the return ping times go from 50 to 150ms. Part of the increase can be explained by the network traffic: the HMAC is 32 bytes and doubles the amount of data on the wire. I think perhaps the extra 50ms goes into hashing, but I'll have to measure that more specifically to find out.

Once I clean up the code a little I'll probably push it to github. A new RF24HMAC class delegates to the existing RF24 class and sends an HMAC packet right after the user data packet for you. The interface is similar; I'm adding some writenum() calls which I might make more like the nodejs Buffer API. Once you have added the user data, call done() to send everything including the HMAC packet.

RF24 radio(9,10);
RF24HMAC radiomac( radio, "wonderful key" );
radiomac.beginWrite();
radiomac.writeu32( time );
bool ok = radiomac.done();


The receiving end boils down to getting a "packet" which is really just the 32 bytes of user data that was sent. The one readAuthenticatedPacket() call actually gets 2 packets off the wire: the user data packet and the HMAC packet. If the HMAC calculated locally for the user data packet does not match the HMAC that was received from the other end then you get a null pointer back from readAuthenticatedPacket(). The data is either authenticated or you don't get to see any of it.

RF24HMAC radiomac( radio, "wonderful key" );
uint8_t* packetData = 0;
if( packetData = radiomac.readAuthenticatedPacket() )
{
     int di = 0;
     uint32_t v = readu32( packetData, di );
     got_time = v;
     printf("Got payload %lu...\n\r",got_time);
}


Oh yeah, I also found this rather cute code to output hex while sniffing around.

June 16, 2014 02:06 AM

June 09, 2014

Blue Hackers: Workplace Bullying

Video: https://www.youtube.com/embed/wAgg32weT80

June 09, 2014 10:20 AM

June 03, 2014

Adrian Sutton: Swift

Apple have released Swift, their new programming language – designed to be familiar to Objective-C programmers and work well with the existing Cocoa frameworks. It’s far too soon to make substantial judgements about the language – that can only come after actually using it in real projects for some time. However, there’s nothing that stands out as incredibly broken, so with Apple’s backing it’s extremely unlikely that it won’t become a very commonly used language. After all, there’s plenty wrong with every other programming language and we manage to make do with them.

What I find most promising about it, though, is that many of the language design choices are justified by the common causes of bugs in Objective-C or C (and many other languages) that they prevent. For example:

“The cases of a switch statement do not “fall through” to the next case in Swift, avoiding common C errors caused by missing break statements.”

Excerpt From: Apple Inc. “The Swift Programming Language.” iBooks

They’ve replaced many common checkstyle/lint errors with better language design that makes the mistakes impossible (what a good idea). It could be argued that they could have taken more extreme approaches to find solutions or prevented more sources of errors with additional cleverness, but my initial take is that it has likely found a good balance between fixing common causes of bugs while still being familiar to Objective-C coders (its target audience) and working well with the existing frameworks and libraries.
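
The switch rule is a nice, concrete example. A trivial sketch of what that looks like (if you genuinely want C-style behaviour you have to ask for it explicitly with the fallthrough keyword):

let httpStatus = 404
switch httpStatus {
case 200:
    println("ok")
case 404:
    println("not found")   // no break required – execution never falls into the next case
default:
    println("something else")
}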

We’ll likely find plenty to complain about, as always, but overall I suspect it will be a very nice language to work with.

June 03, 2014 08:49 AM

Ben Martin: TerryTorial: Wheel Feedback, Object detection, Larger size, and now a screen!

Terry the robot now has grown up a little, both semantically and physically. The 12 inch channel that was the link to the rear swivel wheel now has many friends which are up to 15 inches higher than the base beam. This places Terry's main pan/tilt camera at his top a little bit above table height, so he is far more of a presence than he used to be. On the upside, the wheel controller, shaft encoders, and batteries can all live on the lower level and there is more room for the actual core of Terry up top. I'm likely to add one or two more shelves to the middle of Terry for more controllers etc.


There is now also a 16x32 RGB matrix screen up front so Terry can tell you what he is thinking, what speed his wheels are rotating at, and if there is an obstacle that has been detected.

I'm rendering the framebuffer for the screen on the BeagleBone Black using Cairo and Pango (and thus FreeType). The image data is extracted from Cairo as 32-bit RGBA and packed down into the planar data that the Arduino expects. That packed framebuffer is then sent over UART to an Arduino 328 which takes care of refreshing the RGB matrix so that fake colour levels are achieved using a software PWM/BAM implementation. Yay for the BBB having 4.5 UARTs. I need to work out how to bring up the TX-only UART as that is perfect for the one-way communication used to drive the screen.

I should be able to get better colour precision using a Teensy 3.x as the screen driver. The Cortex-M4 just has more cycles to soft-PWM the screen faster and get a greater perceived colour range.

I customized the font a little in FontForge. Specifically I modified the kerning for "Te" to not waste as much space. The font is based on Cantarell by Dave Crossland. I've only just started on this open font to LED matrix stuff, but with some form of PWM and FreeType rendering the data I hope to be able to get nice antialiased font renders, even at the low resolution of 16x32.

The wheel encoders make a huge difference to the whole experience. Even without full autonomy the encoders allow you to have a repeatable oval or patrol path which can be followed. Being able to set up the speed to avoid large accelerations or jerky movement also means that the now-taller Terry is still a very stable Terry when on the move.

June 03, 2014 04:50 AM

May 28, 2014

Adrian Sutton: Static and Dynamic Languages

Did you know that it’s perfectly fine to enjoy programming in both static and dynamic languages?

— Honza Pokorny (@_honza) May 26, 2014

I do a lot of coding in Java and JavaScript and it’s never bothered me that one has static types and one dynamic (and I’ve used plenty of other languages from both camps as well – it amuses me slightly that the two I use most often tend to be viewed as the worst examples of each type). I can see pros and cons of both approaches: static typing detects a bunch of common errors sooner, which saves time, but there are inevitably times where you wind up just fighting the type system, which wastes time again. Better statically typed languages waste less of your time, but at some point they all cause pain that’s avoided with dynamic languages. It winds up coming down largely to personal preference and is a little affected by what you’re building.

What I have learnt however is that using static and dynamic languages too closely is a recipe for pain and frustration. LMAX develops the main exchange in Java and recently started using Spock for unit tests. Spock is amazing in many, many ways but it comes with the Groovy programming language which is dynamically typed. There isn’t, or perhaps shouldn’t be, any tighter coupling than a class and its unit test so the combination of static and dynamic typing in this case is extremely frustrating.

It turns out that the way I work with languages differs depending on the type system – or rather on the tools available but they are largely affected by the type system. In Java I rename methods without a second thought, safe in the knowledge that the IDE will find all references and rename them. Similarly I can add a parameter to an API and then let the compiler find all the places I need to update. Once there are unit tests in Groovy however neither of those options is completely safe (and using the compiler to find errors is a complete non-starter).

When writing JavaScript I’m not under any illusion that the IDE will find every possible reference for me so I approach the problem differently. I explore using searches rather than compiler errors and turn to running tests to identify problems more quickly because the compiler isn’t there to help. I also tend to write tests differently, expecting that they will need to cover things that the compiler would otherwise have picked up. The system design also changes subtly to make this all work better.

With static and dynamic typing too closely mixed, the expectations and approaches to development become muddled and it winds up being a worst-of-both-worlds approach. I can’t count on the compiler helping me out anymore but I still have to spend time making it happy.

That doesn’t mean that static and dynamic languages can’t co-exist on the same project successfully, just that they need a clearly defined, and relatively stable, interface between them. The most common example being static typing on the server and JavaScript in the browser, in which case the API acts as a buffer between the two. It could just as easily be server to server communication or a defined module API between the two though and still work.

May 28, 2014 02:48 AM

May 27, 2014

Adrian Sutton: Automated Tests Are a Code Smell

Writing automated tests to prove software works correctly is now well established and relying solely or even primarily on manual testing is considered a “very bad sign”. A comprehensive automated test suite gives us a great deal of confidence that if we break something we’ll find out before it hits production.

Despite that, automated tests shouldn’t be our first line of defence against things going wrong. Sure they’re powerful, but all they can do is point out that something is broken; they can’t do anything to prevent it being broken in the first place.

So when we write tests, we should be asking ourselves, can I prevent this problem from happening in the first place? Is there a different design which makes it impossible for this problem to happen?

For example, checkstyle has support for import control, allowing you to write assertions about what different packages can depend on – so package A can use package B but package C can’t. If you’re concerned about package structure it makes a fair bit of sense. Except that it’s a form of testing, and the feedback comes late in the cycle. Much better would be to split the code into separate source trees so that the restrictions are made explicit to the compiler and IDE. That way autocomplete won’t offer suggestions from forbidden packages and the code won’t compile if you use them. It is therefore much harder to do the wrong thing and feedback comes much sooner.
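
For reference, an import-control file looks roughly like the sketch below (DOCTYPE declaration omitted, package names invented) – and none of it is checked until checkstyle actually runs:

<import-control pkg="com.example.app">
  <allow pkg="java"/>                     <!-- everything may use the JDK -->
  <subpackage name="a">
    <allow pkg="com.example.app.b"/>      <!-- package A may use package B -->
  </subpackage>
  <subpackage name="c">
    <disallow pkg="com.example.app.a"/>   <!-- package C may not use package A -->
  </subpackage>
</import-control>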

Each time we write an automated test, we’re admitting that there is a reasonable likelihood that someone could mistakenly break it. In the majority of cases an automated test is the best we can do, but we should be on the lookout for opportunities to replace automated tests with algorithms, designs or tools that eliminate those mistakes in the first place or at least identify them earlier.


May 27, 2014 03:09 AM

May 26, 2014

Ben Martin: To beep or not to beep.

There comes a time in the life of any semi-autonomous robot when the study of the classics occurs. For Terry, the BeagleBone Black driven Actobotics construction, short of visiting the Globe the answer at present would seem to be "To Beep".



Terry is now run by a RoboClaw 2x5A controller board with two 1024 pulse-per-revolution shaft encoders attached to an 8:1 ratio gearing on the wheel. This gives an overall resolution of about 13 bits from the shaft encoders relative to the wheel. In the future I can use this magnificent 4 inch alloy gear wheel as a torque multiplier by moving the motor to its own tooth pinion gear. Terry now also has an IR distance sensor out front, and given the 512MB of RAM on the BeagleBone some fairly interesting mapping can be built up by just driving around and storing readings. Distance + accurate shaft encoders for the win?

To use the RoboClaw from the BeagleBone Black I created a little nodejs class. It's not fully complete yet, but it is already somewhat useful. I found writing the nodejs class interesting because when coding for MCUs one gets quite used to the low-level style of sending bytes and stalling until the reply comes in. For nodejs such a style doesn't translate well. Normally one would see something like

doMagic( 7, rabbits, function() {
    console.log("got them rabbitses, we did");
});

Where the callback function is performed when the task of pulling the number of rabbits from the hat is complete. For using the RoboClaw you might have something like

claw.getVersion( function( v ) {
  console.log("version:" + v.value );
});
claw.setQPPS( 1024 );

Which all seems fairly innocent. The call to getVersion will send the VERSION command to the RoboClaw at a given address (you can have many claws on the same bus). The claw should then give back a version string and a checksum byte. The checksum can be stripped off and verified by the nodejs code.

The trouble is that the second call, to set the quadrature encoder pulses per second, will start to happen before the RoboClaw has had a chance to service the getVersion call. Trying to write a second command before you have the full reply from the first is a recipe for disaster. So the nodejs RoboClaw class maintains a queue of requests. The setQPPS() is not run right away, but enqueued for later. Once the bytes come back over the UART in response to the getVersion(), the callback is run and the RoboClaw nodejs class then picks the next command (setQPPS) and sends that to the RoboClaw. This way you get the strictly ordered serial IO that the RoboClaw hardware needs, but the nodejs code can issue many commands without any stalling. Each command is handled in the order it is written in the nodejs source code; it just doesn't execute right away.
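
The queueing itself is nothing exotic. A sketch of the general shape of the idea (not the real class – reply framing and checksum handling are glossed over):

function CommandQueue(port) {
    this.port = port;   // the serial port connected to the RoboClaw
    this.queue = [];
    this.busy = false;
}

CommandQueue.prototype.enqueue = function (bytes, onReply) {
    this.queue.push({ bytes: bytes, onReply: onReply });
    this.next();
};

CommandQueue.prototype.next = function () {
    if (this.busy || this.queue.length === 0) return;
    this.busy = true;
    var cmd = this.queue.shift();
    var self = this;
    this.port.write(cmd.bytes);
    // only when the reply for this command arrives do we move on to the next one
    this.port.once('data', function (reply) {
        if (cmd.onReply) cmd.onReply(reply);
        self.busy = false;
        self.next();
    });
};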

Next up is trying to tune the PID controller variables for the robot. One might expect a command such as "move 1 wheel circumference forward" to work fairly exactly if the quality of the shaft encoders is good, though for this wheel size and QPPS I currently get a little bit of overshoot. I suspect the current D term is not sufficient to pull it up in time. It may be a non-issue when the wheels are down and friction helps the slowing-down process.

Terry now also has an IMU on board. I created a small class to read from the TWI HMC5883 magnetometer. You'll have to figure out your declination, and I have a small hack at about line 100 to alter the heading so that a 0 degree reading means that physically Terry is facing north. I should bring that parameter out of the class so that it's easier to adjust for other robots that might mount their IMU in a different physical orientation than the one I happened to use.

May 26, 2014 08:07 AM

May 25, 2014

Adrian Sutton: Finding Balance with the SOLID Principles

Kevlin Henney’s talk from YOW is a great deconstruction of the SOLID principles, delving into what they really say as opposed to what people think they say, and what we can learn from them in a nuanced, balanced way.

Far too often we take these rules as absolutes under the mistaken assumption that if we always follow them, our software will be more maintainable. As an industry we seem to be seeking a silver bullet for architecture that solves all our problems just like we’ve searched for a silver bullet language for so long. There is value in these principles and we should take the time to learn from them, but they are just tools in our toolkit that we need to learn not only how to use, but when.

May 25, 2014 10:09 PM

May 24, 2014

Adrian Sutton: Don’t Get Around Much Anymore

Human Resource Executive Online has an interesting story related to my previous entry:

Ask yourself this question: If you were offered a new job in another city where you have no ties or networks, and you suspected that the job would probably not last more than three years (which is a good guess), how much of a raise would they have to give you to get you to move? Remember, you’d likely be trying to find a new job in a place where you don’t have any connections when that job ended. I suspect it would be a lot more than most employers are willing to pay.

Again though this should be driving up the number of remote jobs, but where are they all?

May 24, 2014 08:14 AM

Adrian Sutton: Where Are All The Remote Jobs?

Workshifting – photo by Citrix Online CC BY-NC-ND 2.0

It seems like forever that people have been arguing that remote working is the future. Why limit yourself to just the people who live near your office when hiring? Why suffer the distractions of an office when you could define your own environment in your very own home? Why waste employee time commuting? Surely by now every employee would want to be working from wherever they choose and employers would be forced to accept it, if not actively encourage it to save on office costs and get the best employees.

Yet somehow despite all the hype and promise there seem to be exceedingly few jobs around that allow for remote work. The fastest way I know of to ensure a recruiter never contacts me again is to simply mention that I’m based in Brisbane Australia and have no interest in moving. That greatly surprises me – especially in the technology industry.

When I do see jobs that support remote work it’s almost always for companies that primarily work on open source code or companies that primarily hire employees based on either their involvement in the company’s community or their other open source contributions.

So perhaps one of the big reasons employers aren’t adopting remote work as much is the difficulty in hiring the right people, training them, integrating them into the company and evaluating whether they’re actually working out. Remote working involves a significant amount of trust between employer and employee. New employees generally haven’t had a chance to build up that trust unless they were already contributing to the company’s community or demonstrating their contributions in open source. Unfortunately, looking at open source contributions excludes any developer who enjoys their job so much that they spend all their spare coding time on it but doesn’t currently work in a job that involves a lot of open source.

That fits with my current situation – for the last two and a bit years I’ve had the luxury of working from home, but it only happened because I originally worked in the London office and then resigned to move home. Fortunately they were keen to keep me so I wound up staying on and working from home. Later a second developer went through the same process and now works from New Zealand.

Once trust and culture are established we seem to have plenty of tools to make remote working extremely effective. If remote working is really the future though, we’ll need to find better ways to establish that initial trust and on-board new employees.

Are there hordes of remote working jobs out there that I just don’t hear about? If not, what’s stopping it from taking off?

May 24, 2014 04:38 AM

May 13, 2014

Blue Hackers: Kids Matter – mental health for schoolkids

The KidsMatter site is funded by the Australian Commonwealth Department of Health. There is a specific section for primary schools, KidsMatter Primary.

Seems like an excellent initiative that’s been going for some years already, but not every school will be involved with it yet (cost shouldn’t be a hindrance, it’s free).

So please take a look and mention it to your contacts (principal, P&C, teacher) at your kids’ school!

May 13, 2014 10:34 AM

May 12, 2014

Adrian Sutton: Combining Output of Multiple Bash Commands Into One Line

I wanted to create a CSV file showing the number of JUnit tests in our codebase vs the number of Spock tests over time. I can count the number of tests, along with the revision pretty easily with:

git svn find-rev HEAD
find src/test -name '*Test.java' | wc -l
find src/test -name '*Spec.groovy' | wc -l

But that outputs the results on three separate lines and I really want them on one line with some separator (comma, space or tab). The simplest way to achieve that is:

echo "$(git svn find-rev HEAD), $(find src/test -name '*Test.java' | wc -l), $(find src/test -name '*Spec.groovy' | wc -l)"
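
To get the “over time” part, one rough approach (not from the original post) is to replay the history and append a line per commit – assuming a git-svn clone, a stable src/test layout and a master branch to come back to:

echo "revision, junit, spock" > test-counts.csv
for commit in $(git rev-list --reverse master -- src/test); do
  git checkout --quiet "$commit"
  echo "$(git svn find-rev "$commit"), $(find src/test -name '*Test.java' | wc -l), $(find src/test -name '*Spec.groovy' | wc -l)" >> test-counts.csv
done
git checkout --quiet master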

It’s also very handy to remember that echo has a -n option which omits the trailing new line, so:

echo -n 'Revision: ' && git svn find-rev HEAD

outputs:

Revision: 55450

May 12, 2014 11:55 PM

May 10, 2014

Ben Martin: Going from Arduino TWI to BeagleBone Black Bonescript

It's handy for a robot to know what orientation it is holding, and for that a magnetometer can tell you where north is relative to the chip orientation. For this I used the HMC5883. I started out with a cut-down minimal version of code running on an Arduino and then started porting that over to Bonescript for the BeagleBone Black to execute.

The HMC5883 only needs power (3v3), ground, and the two wires of a TWI.

One of the larger API changes of the port is the Arduino line:

Wire.requestFrom( HMC5883_ADDRESS_MAG, 6 );

Which takes the TWI address of the chip you want 6 bytes from. It has to be prefixed by another TWI transmission to set the read byte address to start from where you would like, in this case HMC5883_REGISTER_MAG_OUT_X_H_M = 0x03.

On the Bonescript side this becomes the following. The TWI address isn't used because the wire object already knows that, but you pass the register address to start reading from directly to the request to read bytes:


self.wire.readBytes( HMC5883_REGISTER_MAG_OUT_X_H_M,
                     BUFFER_SIZE,
                     function(err, res) { ... } );

I pushed the code up to GitHub as bbbMagnetometerHMC5883. Note that getting "north" requires both a proper declination angle for your location and also some modification to allow for how your magnetometer chip is physically oriented on your robot. Actual use of the chip is really easy with the async style of nodejs.

May 10, 2014 11:52 PM

May 07, 2014

Adrian Sutton: The Single Responsibility Principle (and what’s wrong with it)

Marco Cecconi - I don’t love the single responsibility principle:

The principle is arbitrary in itself. What makes one and only Reason To Change™ always, unequivocally better than two Reasons To Change™? The number one sounds great, but I’m a strong advocate of simple things and sometimes a class with more reasons to change is the simplest thing. I agree that we mustn’t do mega-classes that try to do too many different things, but why should a class have one single Reason To Change™? I flatly disagree with the statement. It’s not supported by any fact. I could say “three” reasons and be just as arbitrary.

Interesting critique of the single responsibility pattern. The quote above is perhaps one of the weaker arguments against the single responsibility pattern but it struck a nerve with me because all too often we assume that because a rule sounds good it has some validity. In reality, picking one responsibility for a class is quite arbitrary and, as the article argues, deciding what a responsibility is is even more arbitrary.

So, whilst I agree completely on the premise of not make ginormous classes, I think this principle is not at all helpful in either illustrating the concept or even identifying unequivocally problematic cases so they can be corrected. In the same way, anemic micro-classes that do little are a very complicated way of organizing a code base.

Given I’ve spent time today fixing a bug that was almost entirely caused by having too many classes with too-focused responsibilities, which made the combined structure difficult to reason about and gave it unintended consequences, this resonates strongly as well. Making responsibilities too fine-grained is just as detrimental to the maintainability of code and, somewhat counterintuitively, also to its reusability. Reusing a single class is far simpler than having to work out how to piece together ten separate classes in the right way (hence the existence of builders, which create a facade so that everything is really a single unit again). I can see a lot of value in discussing things in terms of coupling and cohesion rather than responsibilities and using those concepts to find the right balance.

May 07, 2014 04:25 AM

April 29, 2014

Ben Martin: Actobotics and ServoCity, addictive Robot fun!

I have had the great fortune to play around with some of the Actobotics components. The main downside I've seen so far is that it's highly addictive!


What started out as a 3 wheel robot base then obtained a Web interface and an on board access point to control the speed and steering of the beast. The access point runs openWRT (now) and so with the Beagle there are two Linux machines on board!

The pan and tilt is also controlled via the Web interface, and SVGA video streams over the access point from the top-mounted web camera. There is a Lithium battery mounted in channel below the BeagleBone Black which powers it, the camera, and the TP-Link access point. That block of 8 AA rechargeables is currently tasked with turning the motors and servo.

It's early days yet for the robot. With a gyro I can make the pan gearmotor software limited so it doesn't wrap the cables around by turning multiple revolutions. Some feedback from the wheel gearmotors, location awareness and ultrasound etc will start to make commands like "go to the kitchen" more of a possibility.

April 29, 2014 12:52 PM

April 21, 2014

Clinton Roy: PyCon Australia Call for Proposals closes Friday 25th April

There’s less than a week left to get your proposal in for PyCon Australia 2014, Australia’s national Python Conference. We focus on first time speakers so please get in touch if you have any questions. The full details are available at http://2014.pycon-au.org/cfp


April 21, 2014 09:21 AM

April 17, 2014

Ben Fowler: Bad HDMI cables and CEC

Fig: nasty HDMI cable from Maplin that apparently doesn't support CEC

I got myself a Raspberry Pi (a tiny, £30 hacker-friendly computer), and then set it up with RaspXBMC to stream my movie rips off the house file server to my Samsung 6-Series TV. Now, you're supposed to be able to use the TV remote to drive XBMC and it didn't work initially. No matter how hard I tried, I simply could not

April 17, 2014 09:26 PM

April 14, 2014

Adrian Sutton: Testing@LMAX – Testing in Live

Previously in the Testing@LMAX series I’ve mentioned the way we’ve provided isolation between tests, allowing us to run them in parallel. That isolation extends all the way up to supporting a multi-tenancy module called venues which allows us to essentially run multiple, functionally separate exchanges on a single deployment of the LMAX Exchange.

We use the isolation of venues to reduce the amount of hardware we need to run our three separate liquidity pools (LMAX Professional, LMAX Institutional and LMAX Interbank), but that’s not all. We actually use the isolation venues provide to extend our testing all the way into production.

We have a subset of our acceptance tests which, using venues, are run against the exchange as it is deployed in production, using the same APIs our clients, MTF members and internal staff would use, to test that the exchange is fully functional. We have an additional venue on each deployment of the exchange that is used to run these tests. The tests connect to the exchange via the same gateways as our clients (FIX, web, etc) and place real trades that match using the exact same systems and code paths as in the “real” venues. Code-wise there’s nothing special about the test venue; it just so happens that the only external party that ever connects to it is our testing framework.

We don’t run our full suite of acceptance tests against the live exchange due to the time that would take and to ensure that we don’t affect the performance or latency of the exchange. Plus, we already know the code works correctly because it’s already run through continuous integration. Testing in live is focussed on verifying that the various components of the exchange are hooked up correctly and that the deployment process worked correctly. As such we’ve selected a subset of our tests that exercise the key functions of each of the services that make up the exchange. This includes things like testing that an MTF member can connect and provide prices, that clients can connect via either FIX or web and place orders that match against those prices and that the activity in the exchange is reported out correctly via trade reporting and market data feeds.

We run testing in live as an automated step at the start of our release, prior to making any changes, and again at the end of the release to ensure the release worked properly. If testing in live fails we roll back the release. We also run it automatically throughout the day as one part of our monitoring system, and it is also run manually whenever manual work is done or whenever there is any concern for how the exchange is functioning.

While we have quite a lot of other monitoring systems, the ability to run active monitoring like this against the production exchange, going as far as actions that change state, gives us a significant boost in confidence that everything is working as it should, and helps isolate problems more quickly when things aren’t.

April 14, 2014 12:36 PM

April 12, 2014

Adrian Sutton: Testing@LMAX – Test Isolation

One of the most common reasons people avoid writing end-to-end acceptance tests is how difficult it is to make them run fast. Primary amongst these is the time required to start up the entire service and shut it down again. At LMAX, with the full exchange consisting of a large number of different services, multiple databases and other components, start-up is far too slow to be done for each test, so our acceptance tests are designed to run against the same server instance and not interfere with each other.

Most functions in the exchange can be isolated by simply creating one or more accounts and instruments that are unique to the particular test. The test simply starts by using the usual administration APIs to create the users and instruments it needs. Just this basic level of isolation allows us to test a huge amount of the exchange’s functionality – all of the matching behaviour for example. With the tests completely isolated from each other we can run them in parallel against the same server and dramatically reduce the amount of hardware required to run all the acceptance tests in a reasonable time.

But there are many functions in an exchange that cut across instruments and accounts – for example the exchange rates used to convert the profit or loss a user earns in one currency back to their account base currency. Initially, tests that needed to control exchange rates could only be run sequentially, each one taking over the entire exchange while it ran and significantly increasing the time required for the test run. More recently however we’ve made the concept of currency completely generic – tests now simply create unique currencies to use and are able to set the exchange rates between those currencies without affecting any other tests. This makes our acceptance tests run significantly faster, but also means new currencies can be supported in the exchange without any code changes – just use the administration UI to create the desired currency.

We’ve applied the same approach of creating a completely generic solution even when there is a known set of values in a range of other areas, giving us better test isolation and often making it easier to respond to unexpected future requirements. Sometimes this adds complexity to the code or administration options which could have been avoided but the increased testability is well worth it.

The ultimate level of test isolation however is our support for multiple venues running in a single instance of the exchange. This essentially moves the exchange to a multi-tenancy model: a venue encapsulates all aspects of an exchange, allowing us to test back office reports that track how money moves around the exchange, reconciliation reports that cover all trades and many other functions that report on the state of the exchange as a whole.

With the LMAX Exchange now essentially running in three forms (Professional, Institutional and Interbank) this support for venues is more than just an optimisation for tests – we can co-host different instances of the exchange on the same production hardware, reducing not only the upfront investment required but also the ongoing maintenance costs.

Overall we’ve seen that making features more easily testable (using end-to-end acceptance tests) surprisingly often delivers business benefit making the investment well worth it.

April 12, 2014 12:36 PM

April 06, 2014

Blue Hackers: Shift workers beware: Sleep loss may cause brain damage, new research says

April 06, 2014 08:48 AM

April 01, 2014

Blue Hackers: Students and Mental Health at University

The Guardian is collecting experiences from students regarding mental health at university. I must have missed this item earlier as there are only a few days left now to get your contribution in. Please take a look and put in your thoughts!

It’s always excellent to see mental health discussed. It helps us and society as a whole.

April 01, 2014 11:34 PM

Adrian Sutton: Testing@LMAX – Time Travel and the TARDIS

Testing time related functions is always a challenge – generally it involves adding some form of abstraction over the system clock which can then be stubbed, mocked or otherwise controlled by unit tests in order to test the functionality. At LMAX we like the confidence that end-to-end acceptance tests give us but, like most financial systems, a significant amount of our functionality is highly time dependent so we need the same kind of control over time but in a way that works even when the system is running as a whole (which means it’s running in multiple different JVMs or possibly even on different servers).

We’ve achieved that by building on the same abstracted clock as is used in unit tests but exposing it in a system-wide, distributed way. To stay as close as possible to real-world conditions we have some reduced control in acceptance tests; in particular, time always progresses forward – there’s no pause button. However we do have the ability to travel forward in time so that we can test scenarios that span multiple days, weeks or even months quickly. When running acceptance tests, the system clock uses a “time travel” implementation. Initially this clock simply returns the current system time, but it also listens for special time messages on the system’s messaging bus. When one of these time messages is received, the clock calculates the difference between the time specified in the message and the current system clock time and records it. From then on, when it’s asked for the time, the clock adds that difference to the current system time. As a result, when a time message is received time immediately jumps forward to the specified time and then continues advancing at the same rate as the system clock.
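
The resulting clock is conceptually very simple. A sketch of its general shape (not LMAX’s actual code – the clock abstraction it would plug into is assumed):

public class TimeTravelClock {
    private volatile long offsetMillis = 0;

    /** What the rest of the system calls instead of System.currentTimeMillis(). */
    public long currentTimeMillis() {
        return System.currentTimeMillis() + offsetMillis;
    }

    /** Invoked when a time message arrives on the messaging bus. */
    public void onTimeMessage(long targetTimeMillis) {
        long newOffset = targetTimeMillis - System.currentTimeMillis();
        if (newOffset > offsetMillis) {   // time only ever jumps forward
            offsetMillis = newOffset;
        }
    }
}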

Like all good schedulers, ours are written in a way that ensures that events fire in the correct order even if time suddenly jumps forward past the point that the event should have triggered. So receiving a time message not only jumps forward, it also triggers all the events that should have fired during the time period we skipped, allowing us to test that they did their job correctly.

The time messages are published by a time travel service which is only run in our acceptance test environment – it exposes a JMX method which our acceptance tests use to set the current system time. Each service that uses time also exposes its current time and the time its schedulers have reached via JMX, so when a test time travels we can wait until the message is received by each service and all the scheduled events have finished being run.

The TARDIS

The trouble with controlling time like this is that it affects the entire system, so we can’t run multiple tests at the same time or they would interfere with each other. Having to run tests sequentially significantly lengthens the feedback cycle. To solve this we added the TARDIS to the DSL that runs our acceptance tests. The TARDIS provides a central point of control for multiple test cases running in parallel, coordinating time travel so that the tests all move forward together, without the actual test code needing to care about any of the details or synchronisation.

The TARDIS hooks into the DSL at two points – when a test asks to time travel and when a test finishes (by either passing or failing). When a test asks to time travel, the TARDIS tracks the destination times being requested and blocks the test until all tests are either ready to time travel or have completed. It then time travels to the earliest requested time and wakes up any tests that requested that time point so they can continue running. Tests that requested a time point further in the future remain paused waiting for the next time travel.
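
In effect the TARDIS is a barrier keyed by requested destination time. A rough, much-simplified sketch of that coordination (illustrative names only, not the real DSL hook):

import threading


class Tardis:
    def __init__(self, total_tests, travel_fn):
        self._lock = threading.Condition()
        self._active = total_tests    # tests still running
        self._waiting = {}            # test id -> requested destination time
        self._travel_fn = travel_fn   # actually performs the system time travel

    def request_time_travel(self, test_id, destination):
        with self._lock:
            self._waiting[test_id] = destination
            self._maybe_travel()
            # Block until the system has travelled to (at least) our destination.
            while test_id in self._waiting:
                self._lock.wait()

    def test_finished(self, test_id):
        with self._lock:
            self._active -= 1
            self._maybe_travel()

    def _maybe_travel(self):
        # Travel only once every still-running test is parked at a destination.
        if self._waiting and len(self._waiting) == self._active:
            destination = min(self._waiting.values())
            self._travel_fn(destination)
            # Wake only the tests that asked for this time point; the rest
            # stay parked waiting for a later travel.
            for test_id, dest in list(self._waiting.items()):
                if dest <= destination:
                    del self._waiting[test_id]
            self._lock.notify_all()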

Since we had a lot of time travel tests already written before we invented the TARDIS this approach allowed us to start running them in parallel without having to rewrite them – the TARDIS is simply integrated into the DSL framework we use for all tests.

Currently the TARDIS only works for tests running in the same JVM, so essentially it allows test cases to run in parallel with other cases from the same test suite, but it doesn’t allow multiple test suites on separate Romero agents to run in parallel. The next step in its evolution will be to move the TARDIS out of the test’s DSL and provide it as an API from the time travel service on the server. At that point we can run multiple test suites in parallel against the same server. However, we haven’t yet done the research to determine what, if any, benefit we’d get from that change as different test suites may have very different time travel patterns and thus spend most of their time at interim time points waiting for other tests. Also the load on servers during time travel is quite high due to the number of scheduled jobs that can fire so running multiple test suites at once may not be viable.

* Time being in sync is actually a more complex concept than it first appears. The overall architecture of our system meant this approach to time actually did provide very accurate time sources relative to our main “source of all truth”, the exchange venue itself, which is what we really cared about. Even so, anything that had to be strictly “in-sync” generated a timestamp in the service that triggered it and then included that timestamp in the outgoing event, which is the only sane way to do such things.

April 01, 2014 04:44 AM

March 30, 2014

Anthony TownsBitcoincerns

Bitcoincerns — as in Bitcoin concerns! Get it? Hahaha.

Despite having an interest in ecash, I haven’t invested in any bitcoins. I haven’t thought about it in any depth, but my intuition says I don’t really trust it. I’m not really sure why, so I thought I’d write about it to see if I could come up with some answers.

The first thing about bitcoin that bothered me when I first heard about it was the concept of burning CPU cycles for cash — ie, set up a bitcoin miner, get bitcoins, …, profit. The idea of making money by running calculations that don’t provide any benefit to anyone is actually kind of offensive IMO. That’s one of the reasons I didn’t like Microsoft’s Hashcash back in the day. I think that’s not actually correct, though, and that the calculations being run by miners are actually useful in that they ensure the validity of bitcoin transfers.

I’m not particularly bothered by the deflationary expectations people have of bitcoin. The “wild success” cases I’ve seen for bitcoin estimate their value by hand-wavy arguments where you take a crazy big number, divide it by the 20M max bitcoins that are available, and end up with a crazy big number per bitcoin. Here’s the argument I’d make: someday many transactions will take place purely online using bitcoin, let’s say 75% of all transactions in the world by value. Gross World Product (GDP globally) is $40T, so 75% of that is $30T per year. With bitcoin, each coin can participate in a transaction every ten minutes, so that’s up to about 52,000 transactions a year, and there are up to 20M bitcoins. So if each bitcoin is active 100% of the time, you’d end up with a GWP of 1.04T bitcoins per year, and an exchange rate of $28 per bitcoin, growing with world GDP. If, despite accounting for 75% of all transactions, each bitcoin is only active once an hour, multiply that figure by six for $168 per bitcoin.
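
A quick back-of-the-envelope version of that calculation, using the same rounded figures as above:

gwp = 40e12                 # Gross World Product, USD per year
share = 0.75                # fraction of world transactions happening in bitcoin
coins = 20e6                # maximum number of bitcoins (rounded)
txns_per_coin = 52000       # one transaction per coin every ten minutes, per year

btc_turnover = coins * txns_per_coin     # ~1.04e12 bitcoin-transactions per year
price = gwp * share / btc_turnover       # ~$28.8 per bitcoin
print(round(price, 2), round(price * 6, 2))
# prints 28.85 173.08 -- rounding to $28 before multiplying gives the $168 above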

That assumes bitcoins are used entirely as a medium of exchange, rather than hoarded as a store of value. If bitcoins got so expensive that they can only just represent a single Vietnamese Dong, then 21,107 “satoshi” would be worth $1 USD, and a single bitcoin would be worth $4737 USD. You’d then only need 739k bitcoins each participating in a transaction once an hour to take care of 75% of the world’s transactions, with the remaining 19M bitcoins acting as a value store worth about $91B. In the grand scheme of things, that’s not really very much money. I think if you made bitcoins much more expensive than that you’d start cutting into the proportion of the world’s transactions that you can actually account for, which would start forcing you to use other cryptocurrencies for microtransactions, eg.

Ultimately, I think you’d start hitting practical limitations trying to put 75% of the world’s transactions through a single ledger (ie hitting bandwidth, storage and processing constraints), and for bitcoin, that would mean having alternate ledgers which is equivalent to alternate currencies. That would involve some tradeoffs — for bitcoin-like cryptocurrencies you’d have to account for how volatile alternative currencies are, and how amenable the blockchains are to compromise, but, provided there are trusted online exchanges to convert one cryptocurrency into another, that’s probably about it. Alternate cryptocurrencies place additional constraints on the maximum value of bitcoin itself, by reducing the maximum amount of GWP happening in bitcoin versus other currencies.

It’s not clear to me how much value bitcoin has as a value store. Compared to precious metals, it is much easier to transport, much easier to access, much less expensive to store and secure. On the other hand, it’s much easier to destroy or steal. It’s currently also very volatile. As a store of value, the only things that would make it better or worse than an alternative cryptocurrency are (a) how volatile it is, (b) how easy it is to exchange for other goods (liquidity), and (c) how secure the blockchain/algorithms/etc are. Of those, volatility seems like the biggest sticking point. I don’t think it’s unrealistic to imagine wanting to store, say, $1T in cryptocurrency (rather than gold bullion, say), but with only 20M bitcoins, that would mean each bitcoin was worth at least $50,000. Given a current price of about $500, that’s a long way away — and since there are a lot of things that could happen in the meantime, I think high volatility at present is a pretty plausible outcome.

I’m not sure if it’s possible or not, but I have to wonder if a bitcoin based cryptocurrency designed to be resistant to volatility would be implementable. I’m thinking (a) a funded exchange guaranteeing a minimum exchange rate for the currency, and (b) a maximum number of coins and coin generation rate for miners that makes that exchange plausible. The exchange for, let’s call it “bitbullion”, should self-fund to some extent by selling new bitbullion at a price of 10% above guidance, and buying at a price of 10% below guidance (and adjusting guidance up or down slightly any time it buys or sells, purely in order to stay solvent).

I don’t know what the crypto underlying the bitcoin blockchain actually is. I’m surprised it’s held up long enough to get to where bitcoin already is, frankly. There’s nominally $6B worth of bitcoins out there, so it would seem like you could make a reasonable profit if you could hack the algorithm. If there were hundreds of billions or trillions of dollars worth of value stored in cryptocurrency, that would be an even greater risk: being able to steal $1B would tempt a lot of people, being able to destroy $100B, especially if you could pick your target, would tempt a bunch more.

So in any event, the economic/deflation concerns seem assailable to me. The volatility not so much, but I’m not looking to replace my bank at the moment, so that doesn’t bother me either.

I’m very skeptical about the origins of bitcoin. The fact it’s the first successful cryptocurrency, and also the first definitively non-anonymous one is pretty intriguing in my book. Previous cryptocurrencies like Chaum’s ecash focussed on allowing Alice to pay Bob $1 without there being a record of anything other than Alice is $1 poorer, and Bob is $1 richer. Bitcoin does exactly the opposite, providing nothing more than a globally verifiable record of who paid whom how much at what time. That seems like a dream come true for law enforcement — you don’t even have to get a warrant to review the transactions for an account, because everyone’s accounts are already completely public. Of course, you still have to find some way to associate a bitcoin wallet id with an actual person, but I suspect that’s a challenge with any possible cryptocurrency. I’m not quite sure what the status of the digicash/ecash patents are/were, but they were due to expire sometime around now (give or take a few years), I think.

The second thing that strikes me as odd about bitcoin is how easily it’s avoided being regulated to death. I had expected the SEC to decide that bitcoins are a commodity with no real difference to a share certificate, and that as a consequence they can only be traded using regulated exchanges by financial professionals, or similar. Even if bitcoins still count as new enough to only have gotten a knee-jerk regulatory response rather than a considered one (which, at $500 a pop and with significant mainstream media coverage, I doubt), I would have expected something more along the lines of “bitcoin trading is likely to come under regulation XYZ, operating or using an unregulated exchange is likely to be a crime, contact a lawyer” rather than “we’re looking into it”. That makes it seem like bitcoin has influential friends who aren’t being very vocal in public, and conspiracy theories involving NSA and CIA/FBI folks suggesting that leaving bitcoin alone for now might help fight crime seem more plausible than ones involving Gates or Soros or someone secretly creating a new financial world order.

The other aspect is that it seems like there are really only four plausible creators of bitcoin: one or more super smart academic types, a private startup of some sort, an intelligence agency, or a criminal outfit. It seems unlikely to me that a criminal outfit would create a cryptocurrency with a strong audit trail, but I guess you never know. It seems massively unlikely that a legitimate private company would still be secret, rather than cashing out. Likewise it seems unlikely that people who’d just done it because it seemed like an interesting idea would manage to remain anonymous still; though that said, cryptogeeks are weird like that.

If it was created by an intelligence agency, then its life to date makes some sense: advertise it as anonymous online cash that’s great for illegal stuff like buying drugs and can’t be tracked, sucker in a bunch of criminals to using it, then catch them, confiscate the money, and follow the audit trail to catch more folks. If that’s only worked for silk road folks, that’s probably pretty small-time. If bitcoin was successfully marketed as “anonymous, secure cryptocurrency” to organised crime or terrorists, and that gave you another angle to attack some of those networks, you could be on to something. It doesn’t seem like it would be difficult to either break into MtGox and other trading sites to gain an initial mapping between bitcoins and real identities, or to analyse the blockchain comprehensively enough to see through most attempts at bitcoin laundering.

Not that I actually have a problem with any of that. And honestly, if secret government agencies lean on other secret government agencies in order to create an effective and efficient online currency to fight crime, that’s probably a win-win as far as I’m concerned. One concern I guess I have, though, is that if you assume a bunch of law-enforcement cryptonerds built bitcoin, they might also have a way of “turning it off” — perhaps a real compromise in the crypto that means they can easily create forks of the blockchain and make bitcoins useless, or just enough processor power that they can break it by brute force, or even just some partial results in how to break bitcoin that would destroy confidence in it, and destroy the value of any bitcoins. It’d be fairly risky to know of such a flaw, and trust that it wouldn’t be uncovered by the public crypto research community, though.

All that said, if you ignore the criminal and megalomaniacal ideas for bitcoin, and assume the crypto’s sound, it’s pretty interesting. At the moment, a satoshi is worth 5/10,000ths of a cent, which would be awesome for microtransactions if the transaction fee wasn’t at 5c. Hmm, looks like dogecoin probably has the right settings for microtransactions to work. Maybe I should have another go at the pay-per-byte wireless capping I was thinking of that one time… Apart from microtransactions, some of the conditional/multiparty transaction possibilities are probably pretty interesting too.

March 30, 2014 01:00 PM

Adrian SuttonTesting@LMAX – Distributed Builds with Romero

LMAX has invested quite a lot of time into building a suite of automated tests to verify the behaviour of our exchange. While the majority of those tests are unit or integration tests that run extremely fast, in order for us to have confidence that the whole package fits together in the right way we have a lot of end-to-end acceptance tests as well.

These tests deliver a huge amount of confidence and are thus highly valuable to us, but they come at a significant cost because end-to-end tests are relatively time consuming to run. To minimise the feedback cycle we want to run these tests in parallel as much as possible.

We started out by simply creating separate groupings of tests, each of which would run in a different Jenkins job and thus run in parallel. However, as the set of tests changed over time we kept having to rebalance the groups to ensure we got fast feedback. With jobs finishing at different times they would also tend to pick different revisions to run against, so we rarely had a single revision that all the tests had run against, reducing confidence in the specific build we picked to release to production each iteration.

To solve this we’ve created custom software to run our acceptance tests which we call Romero. Romero has three parts:

At the start of a test run, the revision to test is deployed to all servers, then Romero loads all the tests for that revision and begins allocating one test to each agent and assigning that agent a server to run against. When an agent finishes running a test it reports the results back to the server and is allocated another test to run. Romero also records information about how long a test takes to run and then uses that to ensure that the longest running tests are allocated first, to prevent them “overhanging” at the end of the run while all the other agents sit around idle.
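
That allocation strategy is easy to sketch – hand each idle agent the longest test remaining, based on historical timings. This is a simplification with made-up names, not Romero’s actual API:

from queue import Empty, Queue
from threading import Thread


def run_suite(tests, agents, run_on_agent, historical_duration):
    """tests and agents are simple identifiers; run_on_agent(agent, test) runs
    one test to completion; historical_duration(test) is the last recorded run time."""
    # Hand out the historically slowest tests first so they don't overhang
    # at the end of the run while every other agent sits idle.
    work = Queue()
    for test in sorted(tests, key=historical_duration, reverse=True):
        work.put(test)

    def agent_loop(agent):
        while True:
            try:
                test = work.get_nowait()
            except Empty:
                return  # nothing left to run
            run_on_agent(agent, test)

    threads = [Thread(target=agent_loop, args=(agent,)) for agent in agents]
    for t in threads:
        t.start()
    for t in threads:
        t.join()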

To make things run even faster, most of our acceptance test suites can be run in parallel, running multiple test cases at once and also sharing a single server between multiple Romero agents. Some tests, however, exercise functions which affect the global state of the server and can’t share servers. Romero is able to identify these types of tests and use that information when allocating agents to servers. Servers are designated as either parallel, supporting multiple agents, or sequential, supporting only a single agent. At the start of the run Romero calculates the optimal way to allocate servers between the two groups, again using historical information about how long each test takes.
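
That split can also be sketched roughly: estimate how much total test time each pool has to get through and size the pools proportionally. Again, this is an illustration of the idea rather than Romero’s actual calculation:

def split_servers(num_servers, parallel_test_times, sequential_test_times,
                  agents_per_parallel_server=4):
    """Return (parallel_servers, sequential_servers)."""
    # A parallel server hosts several agents at once, so it gets through
    # its share of test time correspondingly faster.
    parallel_load = sum(parallel_test_times) / agents_per_parallel_server
    sequential_load = sum(sequential_test_times)
    total_load = parallel_load + sequential_load
    if total_load == 0:
        return num_servers, 0
    sequential = round(num_servers * sequential_load / total_load)
    # Keep at least one server in any pool that actually has tests to run.
    if sequential_load and sequential == 0:
        sequential = 1
    if parallel_load and sequential == num_servers:
        sequential = num_servers - 1
    return num_servers - sequential, sequential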

Altogether this gives us an acceptance test environment which is self-balancing – if we add a lot of parallel tests one iteration, servers are automatically moved from sequential to parallel to minimise the overall run time.

Romero also has one further trick up its sleeve to reduce feedback time – it reports failures as they happen instead of waiting until the end of the run. Often a problematic commit can be reverted before the end of the run, which is a huge reduction in feedback time – normally the next run would already have started with the bad commit still in place before anyone noticed the problem, effectively doubling the time required to fix it.

The final advantage of Romero is that it seamlessly handles agents dying – even in the middle of running a test – and reallocates that test to another agent, giving us better resiliency and keeping the feedback cycle going even in the case of minor problems in CI. Unfortunately we haven’t yet extended this resiliency to the test servers, but it’s something we would like to do.

March 30, 2014 02:48 AM

March 25, 2014

Adrian SuttonTesting@LMAX – Test Results Database

One of the things we tend to take for granted a bit at LMAX is that we store the results of our acceptance test runs in a database to make them easy to analyse later.  We not only store whether each test passed or not for each revision, but the failure message if it failed, when it ran, how long it took, what server it ran on and a bunch of other information.

Having this info somewhere that’s easy to query lets us perform some fairly advanced analysis on our test data. For example we can find tests that fail when run after 5pm New York (an important cutoff time in the world of finance) or around midnight (in various timezones). It has also allowed us to identify subtly faulty hardware based on the increased rate of test failures.

In our case we have custom software that distributes our acceptance tests across the available hardware, so it records the results directly to the database, but we have also parsed JUnit reports from XML and imported them into the database that way.
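
If you are starting from JUnit XML, the import needs surprisingly little code. A minimal sketch using only the Python standard library, with a hypothetical schema:

import sqlite3
import xml.etree.ElementTree as ET


def import_junit_report(db_path, xml_path, revision, server):
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS test_results (
                        revision TEXT, name TEXT, passed INTEGER,
                        message TEXT, duration REAL, run_at TEXT, server TEXT)""")
    suite = ET.parse(xml_path).getroot()
    for case in suite.iter("testcase"):
        failure = case.find("failure")
        conn.execute("INSERT INTO test_results VALUES (?,?,?,?,?,datetime('now'),?)",
                     (revision,
                      case.get("classname", "") + "." + case.get("name", ""),
                      0 if failure is not None else 1,
                      failure.get("message") if failure is not None else None,
                      float(case.get("time", 0)),
                      server))
    conn.commit()
    conn.close()

Once the data is in a database, questions like “which tests fail when run after 5pm New York” become a single query over the run_at and passed columns.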

However you get the data, having a historical record of test results in a form that’s easy to query is a surprisingly powerful tool and worth the relatively small investment to set up.

March 25, 2014 10:54 AM

Blue HackersRude vs. Mean vs. Bullying: Defining the Differences

March 25, 2014 01:16 AM

March 23, 2014

Adrian SuttonRevert First, Ask Questions Later

The key to making continuous integration work well is to ensure that the build stays green – so that the team always knows that if something doesn’t work it was their change that broke it. However, in any environment with a build pipeline beyond a simple commit build – for example integration, acceptance or deployment tests – sometimes things will break.

When that happens, there is always a natural tendency to commit an additional change that will fix it. That’s almost always a mistake.

The correct approach is to revert first and ask questions later. It doesn’t matter how small the fix might be, there’s a risk that it will fail or introduce some other problem and extend the period that the tests are broken. However, since we know the last build worked correctly, reverting the change is guaranteed to return things to working order. The developers working on the problematic change can then take their time to develop and test the fix, then recommit everything together.

Reverting a commit isn’t a slight against its developers and doesn’t even suggest the changes being made are bad, merely that some detail hasn’t yet been completed and so it’s not yet ready to ship. Having a culture where that’s understood and accepted is an important step in delivering quality software.

March 23, 2014 09:51 AM

March 22, 2014

Anthony TownsBeanBag — Easy access to REST APIs in Python

I’ve been doing a bit of playing around with REST APIs lately, both at work and for my own amusement. One of the things that was frustrating me a bit was that actually accessing the APIs was pretty baroque — you’d have to construct urls manually with string operations, manually encode any URL parameters or POST data, then pass that to a requests call with params to specify auth and SSL validation options and possibly cookies, and then parse whatever response you get to work out if there’s an error and to get at any data. Not a great look, especially compared to XML-RPC support in python, which is what REST APIs are meant to obsolete. Compare, eg:

server = xmlrpclib.Server("http://foo/XML-RPC")
print server.some.function(1,2,3,{"foo": "bar"})

with:

base_url = "https://api.github.com"
resp = requests.get(base_url + "/repos/django/django")
if resp.ok:
    res = resp.json()
else:
    raise Exception(resp.json())

That’s not to say the python way is bad or anything — it’s certainly easier than trying to do it in shell, or with urllib2 or whatever. But I like using python because it makes the difference between pseudocode and real code small, and in this case, the xmlrpc approach is much closer to the pseudocode I’d write than the requests code.

So I had a look around to see if there were any nice libraries to make REST API access easy from the client side. Ended up getting kind of distracted by reading through various arguments that the sorts of things generally called REST APIs aren’t actually “REST” at all according to the original definition of the term, which was to describe the architecture of the web as a whole. One article that gives a reasonably brief overview is this take on REST maturity levels. Otherwise doing a search for the ridiculous acronym “HATEOAS” probably works. I did some stream-of-consciousness posts on Google-Plus as well, see here, here and here.

The end result was I wrote something myself, which I called beanbag. I even got to do it mostly on work time and release it under the GPL. I think it’s pretty cool:

github = beanbag.BeanBag("https://api.github.com")
x = github.repos.django.django()
print x["name"]

As per the README in the source, you can throw in a session object to do various sorts of authentication, including Kerberos and OAuth 1.0a. I’ve tried it with github, twitter, and xero’s public APIs with decent success. It also seems to work with Magento and some of Red Hat’s internal tools without any hassle.
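
For example, authenticated access goes through an ordinary requests session handed to the constructor. This is a sketch based on the README’s description, and the credentials are obviously placeholders:

import requests
import beanbag

# Build a session carrying whatever auth you need, then hand it to BeanBag.
sess = requests.Session()
sess.auth = ("my-user", "my-token")   # hypothetical credentials

github = beanbag.BeanBag("https://api.github.com", session=sess)
me = github.user()                    # GET https://api.github.com/user
print(me["login"])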

March 22, 2014 08:47 AM

March 20, 2014

Blue HackersOn inclusiveness – diversity

Ashe Dryden writes about Dissent Unheard Of – noting “Perhaps the scariest part of speaking out is seeing the subtle insinuation of consequence and veiled threats by those you speak against.”

From my reading of what goes on, much of it is not even very subtle, or veiled. Death and rape threats. Just astonishingly foul. This is not how human beings should treat each other, ever. I have the greatest respect for Ashe, and her courage in speaking out and not being deterred. Rock on Ashe!

The reason I write about it here on BlueHackers.org is that I think there is a fair overlap between issues of harassment and other nasties and depression, and it will affect individuals, companies, conferences and other organisations alike. So it’s important to call it out and actually deal with it, not regard it as someone else’s problem.

Our social and work place environments need to be inclusive for all, it’s as simple as that. Inclusiveness is key to achieving diversity, and diverse environments are the most creative and innovative. If a group is not diverse, you’re missing out in many ways, personally as well as commercially.

Please read Ashe’s thoughtful analysis of what causes people and places to often not be inclusive – such understanding is a good step towards effecting change. Is there something you can do personally to improve matters in an organisation? Please tell about your thoughts, actions and experiences. Let’s talk about this!

March 20, 2014 05:37 AM

March 18, 2014

Adrian SuttonThe truth is out: money is just an IOU

The truth is out: money is just an IOU, and the banks are rolling in it:
What the Bank of England admitted this week is that none of this is really true. To quote from its own initial summary: “Rather than banks receiving deposits when households save and then lending them out, bank lending creates deposits” … “In normal times, the central bank does not fix the amount of money in circulation, nor is central bank money ‘multiplied up’ into more loans and deposits.” In other words, everything we know is not just wrong – it’s backwards. When banks make loans, they create money.
The complexity and subtlety in our financial system is delightful and frightening all at the same time. Almost everything has multiple contributing factors that are impossible to isolate and subtle shifts in world view like this can have huge implications on decision making.

March 18, 2014 11:50 PM

March 02, 2014

Ben MartinThe vs1063 untethered

Late last year I started playing with the vs1063 DSP chip, making a little Audio Player. Quite a lot of tinkering is available for in/output on such a project, so it can be an interesting way to play around with various electronics stuff without it feeling like 10 das blinken lights tutorials in a row.

Over this time I tried using a 5 way joystick for input as well as some more unconventional mechanisms. The Blackberry Trackballer from SparkFun is my favourite primary input device for the player. Most of the right hand side board is to make using that trackballer simpler, boiling it down to a nice I2C device without the need for tight timing or 4 interrupt lines on the Arduino.




The battery in the middle area of the screen is the single 18650 protected battery cell that runs the whole show. The battery leads via a switch to a 3.7v -> 5v step up to provide a clean power input. Midway down the right middle board is a low dropout 3v3 regulator from which the bulk of the board is running. The SFE OLED character display wants its 5v, and the vs1063 breakout board, for whatever reason, wants to regulate its own 3v3 from a 5v input. Those are the two 5v holdouts on the board.

Last blog post I was still using a Uno as the primary microcontroller. I also got rid of that Uno and moved to an on-board 328 running at 8MHz and 3v3. Another interesting learning experience was finding when something 'felt' like it needed a cap. At times the humble cap combo is a great way to get things going again after changing a little part of the board. After the last clean up it should all fit onto 3 boards again, so might then also fit back into the red box. Feels a bit sad not to have broken the red box threshold anymore :|

Loosely the board comes out somewhat like the below... there are things missing from the fritzing layout, but it's a good general impression. The trackballer only needs power, gnd, I2C, and MR pin connections. With reasonable use of SMD components the whole thing should shrivel down to the size of the OLED PCB.

Without tuning the code to sleep the 328 or turn off the attiny84 + OLED screen (the OLED is only set all black), it uses about 65mA while playing. I'm running the attiny84 and OLED from an output pin on the 4050 level shifter, so I can drop power to them completely if I want. I can expect above 40hrs continuous play without doing any power optimization (with a typical ~2600mAh cell, 2600mAh / 65mA is roughly 40 hours). So it's all going to improve from there ;)

March 02, 2014 01:33 PM

February 18, 2014

Daniel DevineSnorkelling at Perth

I was in Perth for linux.conf.au 2014 and for a place that I've heard sucks, I'm mostly seeing the opposite.

/blog_images/perth_1b.jpg

The view from near Mudurup Rocks.

Like Dirk Hohndel's talk, this post is not so much about technology as it is about the sea. I meant to write this post before I had even left Perth but I ended up getting sidetracked by LCA and general horseplay.

After Kate Chapman's enjoyable keynote I decided to blow that popsicle stand and snorkel at Cottesloe Reef (pdf) which is a FHPA (Fish Habitat Protection Area) just 30 minutes from the conference venue by bus.

Read more…

February 18, 2014 08:30 AM

February 10, 2014

Adrian SuttonInterruptions, Coding and Being a Team

Esoteric Curio – 折伏 – as it relates to coding
Our job as a team is to reinforce each other and make each other more productive. While I attempt to conquer some of the engineering tasks as well as legal tasks, and human resource tasks I face, I need concentration and it would certainly help to not be interrupted. I used to think I needed those times of required solitude pretty much all the time. It turns out that I have added far more productivity by enabling my teams than I have personally lost due to interruptions, even when it was inconvenient and frustrating. So much so that I’ve learned to cherish those interruptions in the hope that, on serendipitous occasion, they turn into a swift, spiritual kick to the head. After all, sometimes all you need for a fresh perspective is to turn out the light in the next room; and, of course, to not have avoided the circumstance that brought you to do it.
Too often we confuse personal productivity, or even just the impression of it, with team productivity. Often, though, having individuals slow down is the key to making the team as a whole speed up. See also Go Faster By Not Working and The Myth of Measurable Productivity.

February 10, 2014 03:21 AM

February 04, 2014

Ben Martinattiny screen driver for parallel controlled character displays

This is the "details" post for controlling a 7 bit parallel OLED character display using an attiny as a display driver. See the previous post for an overview of the setup and video.

And now... the code. Apologies for some of the names of directories. Instead of branching and whatnot in git I just copied the code to library(n+1) and added the next feature in line as I went. I might at some stage do a write up detailing the stepping stones to get from the bare attiny84 to the code below.

To make it simpler, I'll call the "main" arduino the Uno. That is the one that wants to run the show. The screen is attached to the attiny84 which I'll just call the tiny84.

This is the code that one uses on the Uno: oled_clientspireal.ino. It is designed to look very close to the example it was based on (linked in the second line of its header comment).

The attiny84 runs this: attiny_oled_server_spi2.ino. The heavy lifting is done in SimpleMessagePassing5, which I link next. The main part of loop() is to process each full message that we get from SimpleMessagePassing5. There is some timeout() logic after that which allows the tiny84 to somewhat turn off the screen after a period of no activity on the SPI bus. noDisplay() doesn't turn off the OLED internal power, so you are only down to about 6mA there.
SimpleMessagePassing5 does two main things – coupling SPI handling and message byte boundaries in a single file (the academics would be unhappy): SimpleMessagePassing5.cpp. An earlier version of SMP did
USICR |= (1 << USIOIE); 
in init(). That line turns on the ISR for USI SPI overflow. But that is itself now done in a pin change ISR so that the USI is only active when the attiny84 has been chip selected.

serviceInput() is a fairly basic state machine, mopping up bytes until we have a whole "message" from the SPI master. If there is a complete message available then takeMessage() will return true. It is then up to the caller to do smp.buffer().get() to actually grab and disregard the bytes that comprise a message.

I'm guessing that at some stage I'd add a "Message" class that takeMessage could then return. The trick will be to do that without needing to copy from smp.buffer() so that sram is kept fairly low.
The Uno uses the shim class Shim_CharacterOLEDSPI2.cpp, which just passes the commands along to the attiny84 for execution.
This all seems to smell a bit like it wants to use something like Google protocol buffers for marshalling, but the hand crafted code works :) In some ways using GPB for such a simple interface as CharacterOLEDSPI2 might well be considered over-engineering, but I'm still tempted.

February 04, 2014 12:22 PM

January 30, 2014

Blue HackersA novel look at how stories may change the brain

January 30, 2014 12:32 PM

January 29, 2014

Ben MartinRipple counting trackballer hall effects

Sparkfun sells a breakout with a blackberry trackballer on it and 4 little hall effect sensors. One complete ball rotation generates 11 hall events, so the up, down, left, and right pins will pulse 11 times a rotation. The original thought was to hook those up to digital pins on the arduino and use an interrupt that covered that block of pins to count the hall events as they came in. Then in loop() one could know how many events had happened since the last check and work from there.

Feeling like doing it more in hardware though, I turned to the 74HC393, which has two 4-bit ripple counters in it. Since there were four lines I wanted to count I needed two 393 chips. The output (the count) from the 393 is offered on four lines per count (it's a 4-bit counter). So I then took those outputs and fed them into the MCP23x17 pin muxer, which has 16 in/out pins on it. I used the I2C version of the MCP chip in this case. So it then boils down to reading the chip when you like and pulsing the MR pin on the 393s to reset the counters.


In the example sketch I pushed to github, I have a small list of choices that you can scroll through, and if you stop scrolling for "a while" then it selects the current entry. Which just happens to be the exact use case I am planning to put this hardware to next. Apart from the feel-good factor, this design should have less chance of missing events if you already have interrupt handlers which themselves might take a while to execute.

January 29, 2014 12:50 PM

January 14, 2014

Adrian SuttonHypercritical: The Road to Geekdom

Geekdom is not a club; it’s a destination, open to anyone who wants to put in the time and effort to travel there…

…dive headfirst into the things that interest you. Soak up every experience. Lose yourself in the pursuit of knowledge. When you finally come up for air, you’ll find that the long road to geekdom no longer stretches out before you. No one can deny you entry. You’re already home.

via Hypercritical: The Road to Geekdom.

January 14, 2014 10:16 AM

Adrian SuttonMyth busting mythbusted

As a follow up to the earlier link regarding the performance of animations in CSS vs JavaScript, Christian Heilmann – Myth busting mythbusted:
Jack is doing a great job arguing his point that CSS animations are not always better than JavaScript animations. The issue is that all this does is debunking a blanket statement that was flawed from the very beginning and distilled down to a sound bite. An argument like “CSS animations are better than JavaScript animations for performance” is not a technical argument. It is damage control. You need to know a lot to make a JavaScript animation perform well, and you can do a lot of damage. If you use a CSS animation the only source of error is the browser. Thus, you prevent a lot of people writing even more badly optimised code for the web.
Some good points that provide balance and perspective to the way web standards evolve and how to approach web development.

January 14, 2014 04:19 AM

January 13, 2014

Adrian SuttonMyth Busting: CSS Animations vs. JavaScript


Myth Busting: CSS Animations vs. JavaScript:

As someone who’s fascinated (bordering on obsessed, actually) with animation and performance, I eagerly jumped on the CSS bandwagon. I didn’t get far, though, before I started uncovering a bunch of major problems that nobody was talking about. I was shocked.

This article is meant to raise awareness about some of the more significant shortcomings of CSS-based animation so that you can avoid the headaches I encountered, and make a more informed decision about when to use JS and when to use CSS for animation.

Some really good detail on performance of animations in browsers. I hadn’t heard of GSAP previously but it looks like a good option for doing animations, especially if you need something beyond simple transitions.

January 13, 2014 09:19 PM

January 09, 2014

Adrian SuttonAre iPads and tablets bad for young children?

The Guardian: Are iPads and tablets bad for young children?

Kaufman strongly believes it is wrong to presume the same evils of tablets as televisions. “When scientists and paediatrician advocacy groups have talked about the danger of screen time for kids, they are lumping together all types of screen use. But most of the research is on TV. It seems misguided to assume that iPad apps are going to have the same effect. It all depends what you are using it for.”

It all depends what you are using it for. I can’t think of a better answer to any question about whether a technology is good or bad. Kids spending time staring at an iPad watching a movie probably isn’t giving them much benefit apart from some down time to have a break, but sitting with your child playing games or reading stories on the iPad has many great benefits.

As a parent, I sometimes find this unsettling. But I try to be mindful that it is an open question whether it is unsettling because there is something wrong with it, or because it wasn’t a feature of my own childhood.

We’re often unaware of how strongly we are biased towards the way we were brought up. People who grew up in a family with two children generally want to have two children themselves. People who grew up on a farm think it’s important for their kids to get experience on a farm, etc. Even when you’re aware of that, it’s easy to forget it works the other way too – you may view certain activities as undesirable for your children purely because you didn’t do them in your childhood.

So what should a parent who fears their child’s proficiency on a tablet do? … “You need to acquire proficiency,” she says. “You can acquire it from them. They can teach you.”

This is probably the best advice in the entire article. Don’t be afraid of doing things with your child just because you aren’t familiar with them or confident in how to do them. Discovering new things together or having your child teach you something is one of the best ways for you both to learn and grow as people.

Finally, regarding the case of a four year old who was supposedly addicted to iPad use: 

that “case”, so eagerly taken up by the tabloids, comprised a single informal phone call with a parent, in which <the doctor> gave advice. There was no followup treatment. He doesn’t believe that “addiction” is a suitable word to use of such young children.

So don’t believe everything you hear in the media…

January 09, 2014 10:39 AM

January 03, 2014

Blue HackersBlueHackers @ linux.conf.au 2014 Perth

BlueHackers.org is an informal initiative with a focus on the wellbeing of geeks who are dealing with depression, anxiety and related matters.

This year we’re more organised than ever with a number of goodies, events and services!

- BlueHackers stickers

- BlueHackers BoF (Tuesday)

- BlueHackers funded psychologist on Thursday and Friday

- extra resources and friendly people to chat with at the conference

Details below…

This year, we’ll have a professional psychologist, Alyssa Garrett (a Perth local) funded by BlueHackers, LCA2014 and Linux Australia. Alyssa will be available Thursday and part of Friday; we’ll allocate her time in half-hour slots using a simple (paper-based) anonymous booking system. Ask a question, tell a story, take time out, find out what psychology is about… particularly if you’re wondering whether you could use some professional help, or you’ve been procrastinating taking that step, this is your chance. It won’t cost you a thing, and it’s absolutely confidential. We just offer this service because we know how important it is! There will be about 15 slots available.

You can meet Alyssa on Tuesday afternoon already, at the BoF. Just to say hi!

The booking sheet will be at the BoF and from Wednesday near the rego desk.

The BlueHackers BoF is on Tuesday afternoon, 5:40pm – 6:40pm (just before the speakers dinner). Check the BoF schedule closer to the time to see which room we’re in. The format will be similar to last year: short lightning talks of people who are happy to talk – either from their own experience, as a support, or professional. No therapy, just sharing some ideas and experience. You can prep slides if you want, but not required. Anything discussed during the BoF stays there – people have to feel comfortable.

We may have some additional paper resources available during the conference, and a friendly face for an informal chat for part of the week.

Every conference bag will have a couple of BlueHackers stickers to put on your laptop and show your quiet support for the cause – letting others know they’re not alone is a great help.

If you have any logistical or other questions, just catch me (Arjen) at the conference or write to:  l i f e (at) b l u e h a c k e r s (dot) o r g

January 03, 2014 08:08 AM

January 02, 2014

Blue HackersHow emotions are mapped in the body

January 02, 2014 10:54 PM

December 23, 2013

Blue HackersFeet up the Wall

Gina Rose of Nourished Naturally writes:

I often spend 5-30 minutes a day with my feet up the wall.
What’s going on in this pose?
Your femur bones are dropping into your hip sockets, relaxing the muscles that help you walk and support your back.
Blood is draining out of your tired feet and legs.
Your nervous system is getting a signal to slow down. Stress release and recovery time.
This position is great for sore legs, helps with digestion & circulation as well as thyroid support. If you suffer from insomnia try this before bed.

I’ve done this at times but at the time never thought through why it might be beneficial. Worth a try! And as they say, it doesn’t hurt to try – but of course it could and if it does hurt, obviously stop straight away.

December 23, 2013 11:58 PM

December 19, 2013

Daniel DevineTLDR: Getting a SSL Certificate Fingerprint in Python

For a project I am working on I need to know the SHA1 hash (the fingerprint) of a DER encoded (binary) certificate file. I found it strange that nobody had offered up a "here's a hunk of code" for something so simple. Any novice programmer would probably eventually arrive at this solution on their own, but for the sake of those Googling, here it is.

Read more…
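
The snippet itself is behind the link, but the approach is simple enough to sketch from the description above – standard library only, and the file path is just a placeholder:

import hashlib


def der_fingerprint(path):
    """SHA1 fingerprint of a DER-encoded (binary) certificate file."""
    with open(path, "rb") as f:
        digest = hashlib.sha1(f.read()).hexdigest()
    # Conventional colon-separated presentation, e.g. AB:12:...:EF
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2)).upper()


print(der_fingerprint("server.crt.der"))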

December 19, 2013 05:21 AM

December 17, 2013

Ben MartinThe kernel of an Arduino audio player

The vs10x3 DSP chips allow you to decode or encode various audio formats. This includes encoding to mp3 and ogg vorbis, as well as decoding flac and support for many other formats. So naturally I took the well trodden path and decided to use an arduino to build a "small" audio player. The breadboard version is shown below. The show is driven by the Uno on the right. The vs1063 is on a Sparkfun breakout board at the top of the image. The black thing you see to the right of the vs1063 is an audio plug. The bottom IC in the line on the right is an attiny84. But wait you say, don't you already have the Uno to be the arduino in the room? But yes I say, because the attiny84 is a SPI slave device which is purely a "display driver" for the 4 bit parallel OLED screen at the bottom. Without having to play tricks overriding the reset pin, the tiny84 has just enough pins for SPI (4), power (2), and the OLED (7), so it's perfect for this application.

The next chip up from the attiny84 is an MCP23017 which connects to the TWI bus and provides 16 digital pins to play with. There is a SPI version of the muxer, the 23S17. The muxer works well for buttons and chip selects which are not toggled frequently. It seems that either my library for the MCP is slow, or using TWI for chip select can slow down SPI operations where selection/deselection happens in a tight loop.

Above the MCP is a 1Mbit SPI RAM chip. I'm using that as a little cache and to store the playlist for ultra quick access. There is only so much you can do with the 2kb of sram on the arduino. Above the SPI ram is a 4050 and SFE bidirectional level shifter. The three buttons in bottom left allow you to control the player reasonably effectively. Though I'm waiting for my next red box day for more controller goodies to arrive.

I've pushed some of the code to control the OLED screen from the attiny up to github, for example at:
https://github.com/monkeyiq/attiny_oled_server_spi2
I'll probably do a more detailed write up about those source files and the whole display driver subsystem at some stage soon.

December 17, 2013 12:47 PM

December 09, 2013

Ben MartinAsymmetric Multiprocessing at the MCU level

I recently got a 16x2 character OLED display from sparkfun. Unfortunately that display had a parallel interface so wanted to mop up at least 7 digital pins on the controlling MCU. Being a software engineer rather than an electrical engineer, the problem looked to me like one that needed another small microcontroller to solve it. The attiny84 is a 14 pin IC which has USI SPI and, once you trim off 4 pins for SPI and 3 for power, gnd, and reset, leaves you 7 digital pins to do your bidding with. This just so happens to be the same 7 pins one needs to drive the OLED over its 4 bit parallel interface!

Driving an OLED screen over SPI using an attiny84 as a display driver from Ben Martin on Vimeo.

Shown in the video above is the attiny84 being used as a display driver by an Arduino Uno. Sorry about having the ceiling fan on during capture. On the Uno side, I have a C++ "shim" class that has the same interface as the class used to drive the OLED locally. The main difference is you have to tell the shim class which pin to use as chip select when it wants to talk to the attiny84. On the attiny commands come in over SPI and the real library that can drive the OLED screen is used to issue the commands to the screen.

The fuss about that green LED is that when it goes out, I've cut the power to the attiny84 and the OLED. Running both of the latter with a reasonable amount of text on screen uses about 20mA; with the screen completely dark and the tiny asleep that drops to 6mA. That can go down to less than 2mA if I turn off the internal power in the OLED. Unfortunately I haven't worked out how to turn the power back on again other than resetting the power to the OLED completely. But being able to drop power completely means that the display is an optional extra and there is no power drain if there is nothing that I want to see.

Another way to go with this is using something like an MCP23S17 chip as a pin muxer and directly controlling the screen from the Uno. The two downsides to that design are the need to modify the real OLED library to use the pin muxer over SPI, and that you don't gain the use of a dedicated MCU to drive the screen. An example of the latter is adding a command to scroll through 10 lines of text. The Uno could issue that command and then forget about the display completely while the attiny handles the details of scrolling and updating the OLED.

Some issues of doing this were working out how to tell the tiny to go into SPI slave mode, then getting non-garbage from the SPI bus when talking to the tiny, and then working out acceptable delays for key times. When you send a byte to the tiny, the ISR that accepts that byte will take "some time" to complete. Even if you are using a preallocated circular array to dispense with the new byte as quickly as possible, the increment and modulo operations take time. Time that can be noticeable when the tiny is clocked at 8MHz and the Uno at 16MHz and you ramp up the SPI clock speed without mercy.

As part of this I also made a real trivial SPI calculator for the attiny84. By trivial I mean store, add, and print are the only operations and there is only a single register for the current value. But it does show that the code that interacts with the SPI bus on the client and server side gets what one would expect to get from a sane adding machine. I'll most likely be putting this code up on github once I clean it up a little bit.

December 09, 2013 03:50 AM

Paul GearonDDD on Mavericks

Despite everything else I'm trying to get done, I've been enjoying some of my time working on Gherkin. However, after merging in a new feature the other day, I caused a regression (dammit).

Until now, I've been using judiciously inserted calls to echo to trace what has been going on in Gherkin. That does work, but it can lead to bugs due to interfering with return values from functions, needs cleanup, needs the code to be modified each time something new needs to be inspected, and can result in a lot of unnecessary output before the required data shows up. Basically, once you get past a particular point, you really need to adopt a more scalable approach to debugging.

Luckily, Bash code like Gherkin can be debugged with bashdb. I'm not really familiar with bashdb, so I figured I should use DDD to drive it. Like most Gnu projects, I usually try to install them with Fink, and sure enough, DDD is available. However, installation failed, with an error due to an ambiguous overloaded + operator. It turns out that this is due to an update in the C++ compiler in OSX Mavericks. The code fix is trivial, though the patch hasn't been integrated yet.

Downloading DDD directly and running configure got stuck on finding the X11 libraries. I could have specified them manually, but I wasn't sure which ones fink likes to use, and the system has several available (some of them old). The correct one was /usr/X11R6/lib, but given the other dependencies of DDD I preferred to use Fink. However, until the Fink package gets updated it won't compile and install on its own, so I figured I should try to tweak Fink to apply the patch.

Increasing the verbosity level on Fink showed up a patch file that was already being applied from:
/sw/fink/dists/stable/main/finkinfo/devel/ddd.patch

It looks like Fink packages all rely on the basic package, with a patch applied in order to fit with Fink or OS X. So all it took was for me to update this file with the one-line patch that would make DDD compile. One caveat is that Fink uses package descriptions that include checksums for various files, including the patch file. My first attempt at using the new patch reported both the expected checksum and the one that was found, so that made it easy to update the .info file.

If you found yourself here while trying to get Fink to install DDD, then just use these 2 files to replace the ones in your system:

If you have any sense, you'll check that the patch file does what I say it does. :)

Note that when you update your Fink repository, then these files should get replaced.

December 09, 2013 12:34 AM


Last updated: July 28, 2014 06:00 AM. Contact Humbug Admin with problems.