HUMBUGers


September 26, 2014

Adrian Sutton: Safely Encoding Any String Into JavaScript Code Using JavaScript

When generating a JavaScript file dynamically it’s not uncommon to have to embed an arbitrary string into the resulting code so it can be operated on. For example:

function createCode(inputValue) {
  return "function getValue() { return '" + inputValue + "'; }";
}

This simplistic version works great for simple strings:

createCode("Hello world!");
// Gives: function getValue() { return 'Hello world!'; }

But breaks as soon as inputValue contains a special character, e.g.

createCode("Hello 'quotes'!");
// Gives: function getValue() { return 'Hello 'quotes'!'; }

You can escape single quotes but it still breaks if the input contains a \ character. The easiest way to fully escape the string is to use JSON.stringify:

function createCode(inputValue) {
  return "function getValue() { return " +
    JSON.stringify(String(inputValue)) +
    "; }";
}

Note that JSON.stringify even adds the quotes for us. This works because a JSON string is a JavaScript string, so if you pass a string to JSON.stringify it will return a perfectly valid JavaScript string complete with quotes that is guaranteed to evaluate back to the original string.

The one catch is that JSON.stringify will happily stringify JavaScript objects and numbers, not just strings, so we need to force the value to be a string first – hence, String(inputValue).
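As a quick sanity check (a hypothetical example, not from the original post), the JSON.stringify version copes with both quotes and backslashes:

createCode("Hello 'quotes' and \\ backslashes!");
// Gives: function getValue() { return "Hello 'quotes' and \\ backslashes!"; }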


September 26, 2014 01:21 AM

September 25, 2014

Adrian Sutton: Software is sometimes done

In Software is sometimes done Rian van der Merwe makes the argument that we need more software that is “done”:

I do wonder what would happen if we felt the weight of responsibility a little more when we’re designing software. What if we go into a project as if the design we come up with might not only be done at some point, but might be around for 100 years or more? Would we make it fit into the web environment better, give it a timeless aesthetic, and spend more time considering the consequences of our design decisions?

It’s an interesting question – if we had to get things right the first time, would we do a better job? If our design decisions were set in stone for all time would things be better?

The problem is, we’ve already asked this question and decided that in fact designing things up front and setting it in stone doesn’t work as well as releasing early and often with short feedback cycles so that we can adjust as we go. It’s waterfall vs agile and it turns out agile wins.

That said, there’s a difference between rushing a sloppy job out the door and doing things well with an iterative cycle to adjust to learning. A short feedback loop is there to let you learn and improve, not to let you release any old thing and get away with it. We need more software developed by doing the best job possible with the information available, combined with a short feedback cycle to gather more information and continually raise the stakes for what’s possible.

It can be romantic to look back and think that we used to do a better job because things were more permanent, that software used to be done, but it’s just not true:

When Windows 95 came out, it was done. Yes, there were some patches to it, but they were few and far between, and in general quite difficult to come by. But of course, then the Internet and App Stores happened in full force, and suddenly we decided that “Software is never done”. In some sense this is certainly true. There are always bugs to fix, things to improve, more features to add, unused features to remove — and of course, the SaaS model makes it all so easy. But I wonder if we’ve taken this a bit too far.

Windows 95 may have been done, but Windows was not. Otherwise we’d still be running Windows 95. We’re kidding ourselves if we think that anyone at Microsoft ever thought that Windows 95 would be the last thing they ever released, that the OS would never change in the future.

Even if we consider Windows 95 as a standalone thing that is “done”, would you run it today? Of course not; by modern standards it’s horrible. The same is true of every other piece of unmaintained software I can think of: it may have been good enough, or even the best, for a long time after it became unmaintained, but eventually it falls behind. Eventually it stops being “done” in the sense that it doesn’t need any further work and becomes “done” in the sense that no one uses it anymore.


September 25, 2014 12:08 AM

September 21, 2014

Adrian Sutton: CI Isn’t a To-do List

When you have automated testing set up with a big display board to provide clear feedback, it’s tempting to try and use it for things it wasn’t intended for. One example of that is as a kind of reminder system – you identify a problem somewhere that can’t be addressed immediately but needs to be handled at some point in the future. It’s tempting to add a test that begins failing after a certain date (or the inverse, whitelisting errors until a certain date). Now the build is green and there’s no risk of you forgetting to address the problem because it will fail again in the future. Perfect, right?
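As a concrete sketch of that temptation (hypothetical code, assuming a mocha-style it() from whatever test framework is in use), the kind of test people end up writing looks something like this – its outcome depends on the clock rather than on the code:

// Hypothetical "reminder" test: green today, red after the chosen date,
// regardless of whether the code has changed at all.
it('whitelists the validation issue until 1 November 2014', function() {
    var deadline = new Date('2014-11-01');
    if (new Date() > deadline) {
        throw new Error('Temporary whitelist has expired - go back and fix the validation properly');
    }
});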

The problem is that it sacrifices the ability to easily isolate the cause of a failure. You can no longer revert changes to get back to a working baseline because the current time is a factor in whether your tests pass or not. You can no longer run historical builds through CI which you may need to do as part of supporting clients on older versions.

It also inevitably leads to false failures as the time points are almost always arbitrary and estimated timelines for completing tasks often slip. So the board goes red and the response is to just suppress it again.

Continuous integration isn’t there to remind you to do something – it’s there to tell you if your build is ok to ship to production or not. It doesn’t make sense to say that a build is ok to ship to production now but not ok to ship in a week’s time. If production is going to blow up in the future because of a bug in the code you want the board to be red now, not when production actually blows up. And if the client is prepared to accept the risk and leave the problem unfixed then it’s not yet a requirement and doesn’t need tests asserting it – just put a card in the backlog for the client to prioritise and play when required.

If you need to remember to do something, either put a task in the backlog or on the current story board to do it or put a reminder in your calendar. Keep CI focussed on telling you about the state of your code right now – if it’s broken it should be red, if it’s not broken it should be green. Not broken until next week should never be an option.


September 21, 2014 11:59 PM

September 16, 2014

Blue Hackers: Quitting cigarettes

Why is it that mental illness and smoking go hand in hand? I’m getting closer to getting a foot in the door with my dream job doing something I love. I also plan to quit smoking again with my fiancée and two friends. It’s not going to be easy, but I once went seven months without smoking. I really don’t enjoy it and it costs too much. That, and every cigarette takes eight minutes off my life. I’ll keep you readers of BlueHackers posted about the job. I’m excited.

September 16, 2014 05:03 PM

September 14, 2014

Blue Hackers: More exercise and happiness

So I managed to last one hour the other day doing boxing-related exercises. After I had cooled down I felt great. Those endorphins are so good for me. I want more, but I’m not good at wanting to exercise.

September 14, 2014 07:55 PM

September 10, 2014

Blue Hackers: Losing weight

Today I officially have begun trying to lose weight. Medications and poor lifestyle choices have put me at very high risk of diabetes. I also have high cholesterol. Not good. I did 30 minutes of intense boxing exercise. Thanks Ben S. He’s my best man for my wedding and a friend of roughly 8 years.

September 10, 2014 08:39 AM

Ben Martin: Getting a feel for Metapolator and Cascading Parameter Values


Metapolator is a new project for working on font families. It allows you to generate many fonts in a family from a few "master" cases. For example, you could have a normal font, modify it to create a rather bold version and then use metapolator to generate 10 shades on the line between normal and bold.

I have recently been reading up on Metapolator and how to hack on it, so this post describes my limited understanding of what is a fairly young project. The usual warnings apply: errors are my own, the code is the final arbiter, etc.

Much of the documentation of Metapolator involves the word Meta, so I'm going to drop it off this post as seeing it all the time removes its value in terms of semantic add.

At the core of all of this polating are parameters. For example, after importing a font and calling it "normal" you might assign a value of 100 to xheight. I am assuming that many of the spline points in the glyph (skeleton) can then be defined in terms of this xheight. So the top of the 'n' might be 0.95*xheight.

A system using much the same syntax as Cascading Style Sheets is available to allow parameter values to be set or updated. Because it’s parameters, it’s called CPS instead of CSS. So you might select a glyph like 'glyph#n' and then set its xheight to 105 instead. It seems these selectors go right down to the individual point, if that’s interesting.

In order to understand the CPS system I decided to start modifying a basic example and trying to get specific values back out of the CPS system. The description of this is mainly to see if my playing around was somewhat along the lines of the intended use of the CPS system.

For this I use a very basic CPS file:

$ cat /tmp/basic.cps
 
* {
     label : 1234;
     xx    : 5;
}

glyph#y penstroke:i(0) point:i(0) {
     xx    : 6;
}

$ metapolator dev-playground-cps /tmp/basic.cps

The existing dev-playground-cps command makes its own fonts up so all you need is a CPS file that you want to apply to those fonts. In my case I'm using two new properties, the label and 'xx' which are of type string and number respectively.

A default value of 3 is assigned to xx for all points and each point and glyph get a unique label during setup.

I found it insightful to test the below with and without selectors that modify the 'xx' property in the CPS, and at both levels. That is, changing the xx:5 and xx:6 in the above CPS to be xxno1:5 and xxno2:6 and seeing what the below printed out. The xx.value makes the most sense to me: show me the default value (3) if nothing in any CPS overrides it, or show me what the CPS has set if it did override it for the point.

var element = controller.query('master#heidi glyph#y penstroke:i(0) point:i(0)');
console.log('element:', element.particulars);
console.log('element:', element.label);
console.log('element:', element.xx);
var computed = controller.getComputedStyle(element);
console.log('label: ' + computed.get('label'));
console.log('xx.base   : ' + computed.getCPSValue('xx'));
console.log('xx.updated: ' + computed.getCPS('xx'));
console.log('xx.value  : ' + (computed.getCPS('xx') ? computed.getCPS('xx') : computed.getCPSValue('xx')));


The above code is also pushed to a branch of my mp fork at cps.js#L213

I found that a little tinkering in StyleDict.js was needed to get things to operate how I'd expected, which is most likely because I'm using it wrong^tm. The main thing was changing getCPSValue to test for a local entry for a parameter before using the global default (StyleDict.js#L93).

I might look at adding a way to apply a CPS file to a named font and show the resulting font as pretty JSON. For reference, this will likely have value and valuebase showing the (possibly CPS-updated) value and the value from the original font respectively.

September 10, 2014 05:03 AM

September 09, 2014

Blue Hackers: 7 Things You Shouldn’t…

http://www.huffingtonpost.com/2014/02/17/things-not-to-say-to-some_n_4781182.html

If you’ve ever suffered from severe anxiety, you’re probably overly familiar with the control it can have over your life. And you’re not alone — it affects a sizeable percentage of the population.

Learning more about anxiety and stress can be really helpful.

September 09, 2014 11:20 PM

September 07, 2014

Blue Hackers: Anxiety Attack felt like Heart Attack :-(

I had my first anxiety attack the other day. My lady was off picking a wedding dress. I was looking after our son, who was asleep, and it just came on. I got our neighbour, who luckily was home that day in his garage working on his cars. It felt like a heart attack. It stopped me from moving my right shoulder properly. I guess now I know I need help, more than a psychiatrist can help with (as they only prescribe medication, if you did not know already). I’m looking into a psychologist. I pay for private health with an awesome company called ahm. I don’t ever want to return to needing to go into hospital though. This will be a short post, but never discredit anyone who says they suffer anxiety, as it’s a serious thing that causes actual physical pain. It wasn’t until the GP gave me the all clear that I felt better again. Oh, and now I wear glasses as I’m short-sighted from many years of looking at computer screens.

September 07, 2014 01:22 AM

September 02, 2014

Blue Hackers: Follow up

I have a mental illness from consuming weed for those years. I have major depression and anxiety. I also get paranoid about germs, what people think of me, and my health. I think sometimes I make things worse for myself. The best thing that has ever happened is meeting my lovely Becci. She has definitely taken my unwell self and made me well. I had long quit the weed, but recovering from heavy usage takes the brain a while. Years, in fact. I have been in and out of work. Fired for having a mental illness (CBA), and more recently, as in last year, my mother doused herself in gasoline and set herself alight. I haven’t walked easy street. But I try to keep my head up and my wits about me. I have a family to care for, and my grandparents who helped raise me quite a bit. Well, a lot.

September 02, 2014 01:00 PM

Blue Hackers: A bit about “jlg”

I’m 29, male, from sunny Brisbane (sunny at the moment). I was born in Adelaide, SA, Australia in a hospital called Modbury hospital. It’s still there. I have one son. I also have a daughter who by law I am not legally allowed to see, as I am not on the birth certificate, but I’m 99.99 percent sure I’m her father. Her name for the record is Annabel. I’m unsure of the spelling. Our son (mine and Bec’s) is being raised with so much love and care and I only wish the same for my daughter. I should mention I’m no street thug or criminal. I actually have no criminal record. I survive on $500AUD a fortnight currently, as of right this moment. Which is not much for an overweight male. I don’t really have any vices per se, but I use computers so frequently that at my age of 29 I have short-sighted vision. I should mention I don’t have diabetes.

My story is a common one, I think? Man meets woman (Steve Cullen, my bio dad). Has sex, finds out there’s a baby and does a runner. I have to this day never met my bio dad. I have seen a photo from when I was younger. He was some bald dude. I don’t think much of him and I actually don’t speak much of him. My mother was awesome and she still is, albeit after her last suicide attempt. I will get to this later. I should mention I was a heavy smoker of cannabis from 2003 to 2007. I attended a place called HUMBUG. Ironically it was a friend I made called Daniel who got me into marijuana. He would write code, I’d hack computers. We kind of worked as a team, because the trust was set by us consuming so much (I will call it weed). I’m not proud of my drug usage, but little did I know my Mum was a heavy user of other “drugs”. She was also in the army. For roughly 6 years she taught Army service men and women about English/maths etc., as she was a UniSA-educated teacher. I, on the other hand, am self-taught. From a young age I was somewhat unwillingly writing phishing attacks, but for chat websites. I would call these fake logins via HTML. I did this all roughly during high school. I admit freely that the school network was a joke. That doesn’t mean I abused it. I just made sure I couldn’t use their computers by inputting the ASCII character alt+256, the invisible char, into the login screen I was using. It was Novell and it would not log in if you entered this char. If you did it quickly without the teacher looking, you’d get moved, say, to a girl you liked and flirt with her…. :-o)

For the sake of keeping things realistic and true, I was actually very frigid. I dated some real nice girls; I just couldn’t even get the courage to do anything more than sit near them. That obviously changed in my final few years. I have always been anti-authority because I actually had zero parental supervision for most of my teens. I would sit in front of my IBM Aptiva listening to god-awful rap music I won’t mention online. I would sit reading RFCs, reading how to write HTML, then thinking outside the box and doing what’s now called XSS (aka cross-site scripting). Yahoo was one I did. Obviously I have never been the type of guy to go “hey, here’s my handle and here I am, LEA, track me down”. I prefer doing these things without an identity; I always have, always will. I am a freelance individual. Whilst I sympathize with various well-known hacktivists, I do not go out of my way to engage them.

I think this is enough for now… I will update soon. It’s 10.37pm, I know, not that late, but all this writing has exhausted me. More later.

September 02, 2014 12:37 PM

August 28, 2014

Ben Martin: Terry is getting In-Terry-gence.

I had hoped to use a quad core ARM machine running ROS to spruce up Terry the robot, performing tasks like robotic pick and place, controlling Tiny Tim and autonomous "docking". Unfortunately I found that trying to use a Kinect from an ARM based Linux machine can make for some interesting times. So I thought I'd dig at the ultra low end Intel chipset "SBC". The below is a J1900 Atom machine which can have up to 8GB of RAM and sports the features that one expects from a contemporary desktop machine: gigabit Ethernet, USB3, SATA3, and even a PCI-e expansion.


A big draw to this is the "DC" version, which takes a normal laptop style power connector instead of the much larger ATX connectors. This makes it much simpler to hook up to a battery pack for mobile use. The board runs nicely from a laptop extension battery, even if the on button is a bit funky looking. On the left is a nice battery pack which is running the whole PC.

An interesting feature of this motherboard is that it has no LEDs at all. I had sort of gotten used to Intel boards having blinking status and power LEDs and the like.
There should be enough CPU grunt to handle the Kinect and start looking at doing DSLAM and thus autonomous navigation.

August 28, 2014 12:15 PM

August 27, 2014

Daniel Devine: Mailbag: Windows Developer Program for IoT


So yeah, this is not something you'd expect to see here right?

Basically, I just heard that Microsoft was giving away free Intel Galileo boards and I signed up for shits and giggles and did not really expect to be granted one.

Part of the signup process (which I think is still open) is stating what you plan to build with the board. I can't remember exactly what I wrote, but the one thing I do remember mentioning was investigating security related problems and solutions in the Internet of Things (IoT) space. A month or so later and I still find the topic interesting so that's what I'm going to do.

Read more…

August 27, 2014 07:53 AM

August 26, 2014

Blue Hackers: About your breakfast

We know that eating well (good nutritional balance) and at the right times is good for your mental as well as your physical health.

There’s some new research out on breakfast. The article I spotted (Breakfast no longer ‘most important meal of the day’ | SBS) goes a bit popular and funny on it, so I’ll phrase it independently in an attempt to get the real information out.

One of the researchers makes the point that skipping breakfast is not the same as deferring it. So consider the reason: are you going to eat properly a bit later, or are you not eating at all?

When you do have breakfast, note that really most cereals contain an atrocious amount of sugar (and other carbs) that you can’t realistically burn off even with a hard day’s work. And from my own personal observation, there’s often way too much salt in there also. Check out Kellogg’s Cornflakes for a neat example of way-too-much-salt.

Basically, the research comes back to the fact that just eating is not the point, it’s what you eat that actually really does matter.

What do you have for breakfast, and at what point/time in your day?

August 26, 2014 02:00 AM

July 26, 2014

Adrian Sutton: Don’t Make Your Design Responsive

Every web based product is adding “responsive design” to their feature lists. Unfortunately in many cases that responsive design is actually making their product much harder to use on a variety of screen sizes instead of easier.

The problem is that common CSS libraries and grid systems, including the extremely popular bootstrap, imply that a design can be made responsive using fixed cut-off points and the design just automatically adjusts. In reality making a design responsive requires tailoring the cut off points so that the design adjusts at the points where it stops working well.

For example, let’s look at the bootstrap documentation and in particular the top menu it uses. The bootstrap documentation gets it right, we can shrink the window down right to the point where the menus only just fit and they stick to the full size design:

Menu remains full-size for as long as it can fit.

If we shrink the window further the menus wouldn’t fit anymore so they correctly switch to the hamburger style:

Menu collapses once it would no longer fit.

That’s the right way to do it. The cutoff points have been specifically tailored for the content. There are other stylistic changes as well as these structural ones – the designer believes that centred text works better for headings on smaller screens for example. That’s fine, they’re fairly arbitrary design decisions based on what the designer believes looks best. I’m focussed on structural issues.

To see what happens if we ignore this, let’s pretend that we add a new menu item:

Our "New Item" shown correctly when the window is wide enough.

But now when we shrink the window down, the breakpoint is in the wrong place:

The "New Item" menu no longer fits but causes incorrect wrapping because the break point is wrong.

Now the design breaks as we shrink the window because the break point hasn’t been specifically tailored to the content. This is the type of error that regularly happens when people think that a responsive grid system can automatically make their site responsive. The repercussions in this case aren’t particularly bad, but it can be significantly worse.

Recently Jenkins released a rewrite of their UI moving it to using bootstrap which has unfortunately gotten responsive design completely wrong (and sadly I’m yet to see anything it’s actually improved). Browser widths that used to work perfectly well with the desktop only site are now treated as mobile browsers and content wraps into a long column. What’s worse, the most useless content, the sidebar, is what’s shown at the top with the main content pushed way down the page. At other widths the design doesn’t fit but doesn’t wrap leaving some links completely inaccessible.

It would be much better if people stopped jumping on the responsive design bandwagon and just designed their site to work for desktop browsers unless they are prepared to fully invest and do responsive design right. Mobile browsers are designed to work well with sites designed for desktop and have lots of tools and techniques for coping with them. As you add responsive design and other adjustments designed for mobile, you shift responsibility for making the design work well everywhere from the browser onto yourself. Using predefined breakpoints from a CSS library is unlikely to give the result you intend. It would be nice if CSS libraries stopped claiming that it will.

July 26, 2014 11:50 AM

July 20, 2014

Adrian Sutton: From Java to Swift

Ever since the public beta of OS X I’ve been meaning to get around to learning Objective-C but for one reason or another never found a real reason to. I’ve picked up bits and pieces of it and even written a couple of working utilities but those were pretty much entirely copy/paste from various sources. Essentially they were small enough and short-lived enough that I only needed the barest grasp of Objective-C syntax and no understanding of the core philosophies and idioms that really make a language what it is. This is probably best exemplified by the approach to memory management those utilities took: it won’t run for long, so just let it leak.

I do however have a ton of experience in Java and JavaScript plus knowledge and experience in a bunch of other languages to a wide range of extents. In other words, I’m not a complete moron, I’m just a complete moron with Objective-C.

Anyway, obviously when Swift was announced it was complete justification for my ignoring Objective-C all these years and interesting enough for me to actually get around to learning it.

So I’ve been building a very small little utility in Swift so I can pull out information from OS X’s system calendar from the command line and push it around to various places that I happen to want it and can’t otherwise get it. The code is up on GitHub if you’re interested – code reviews and patches most welcome. It’s been a great little project to get used to Swift the language without spending too much time trying to learn all the OS X APIs.

Language Features

Swift has some really nice language features that make dealing with common scenarios simple and clear. Unlike many languages it doesn’t seem to go too far with that though – it doesn’t seem likely that people will abuse its features and create overly complex or overly succinct code.

My favourite feature is the built-in optional support. A variable of type String is guaranteed to not be null; a variable of type String? might be. You can’t call any methods from String on a String? variable directly, you have to unwrap it first – confirming it isn’t null. That would be painful if it weren’t for the ‘if let’ construct:

let events: String? = ""
if let events = events {
    events.utf16count()
}

I’ve dropped into a habit here which might be a bit overly clever – naming the unwrapped variable the same as the wrapped one. The main reason for this is that I can never think of a better name. I figure it’s much like having an if events != nil check.

APIs

Calling Swift a new language is correct but it would almost be more accurate to call it a new syntax instead. Swift does have its own core API which is unique to it, but that’s very limited. For the most part you’re actually dealing with the OS X (or iOS) APIs which are shared with Objective-C. Thus, people with experience developing in Objective-C quite obviously have a huge head start with Swift.

The other impact of sharing so many APIs with Objective-C is that some of the benefits of Swift get lost – especially around strict type checking and null reference safety. For example retrieving a list of calendar events is done via the EventKit method:

func eventsMatchingPredicate(predicate: NSPredicate!) -> [AnyObject]!

which is helpfully displayed inside Xcode using Swift syntax despite it being a pre-existing Objective-C API and almost certainly still implemented in Objective-C. However if you look at the return type you see the downside of inheriting the Objective-C APIs: the method documentation says it returns [EKEvent] but the actual declaration is [AnyObject]!  So we’ve lost both type safety and null reference safety because Objective-C doesn’t have non-nullable references or generic arrays. It’s not a massive loss because those APIs are well tested and quite stable so we’re extremely unlikely to be surprised by a null reference or unexpected type in the array, but it does require some casting in our code and requires humans to read documentation and check if something can be null rather than having the compiler do it for us.

If Swift were intended to be a language that competes with Java, python or ruby the legacy of the Objective-C APIs would be a real problem. However, Swift is designed specifically to work with those APIs, to be a relatively small but powerful step for OS X and iOS developers. In that context the legacy APIs are really just a small bump in the road that will smooth out over time.

Xcode

The other really big thing a Java developer notices switching to Swift is what a Java developer notices when switching to any other language – the tools suck. In particular, the Java IDEs are superb these days and make writing, navigating and refactoring code so much easier. Xcode doesn’t even come close. The available refactorings are quite primitive even for C and Objective-C and they aren’t supported at all for Swift.

The various project settings and preferences in Xcode are also a complete mystery – and judging from the various questions and explanations on the internet it doesn’t seem to be all that much clearer even to people with lots of experience. In reality I doubt it’s really much different to Java, which also has a ridiculous amount of complexity in IDE settings. The big difference is that in the Java world you (hopefully) start out by learning the basics using the standard command line tools directly. Doing so gives you a good understanding of the build and runtime setup and makes it much clearer what is controlling how your software is built and what is just setting up IDE preferences. Xcode does provide a full suite of command line developer tools so hopefully I can learn more about them and get that basic understanding.

Finally, Xcode 6 beta 3 is horribly buggy. It’s a beta so I can forgive that but I’m surprised at just how bad it is even for a beta.

Cocoa Pods

This was a delight to stumble across.  Adding a dependency to a project was a daunting prospect in Xcode (jar files are surprisingly brilliant). I really don’t know what it did but it worked and I’m grateful. Libraries that aren’t available as pods are pretty much dead to me now. There does seem to be a pretty impressive array of libraries available for a wide range of tasks. Currently all of them are Objective-C libraries so you have to be able to understand Objective-C headers and examples and convert them to Swift but it’s not terribly difficult (and trivial for anyone with an Objective-C background).

Overall

Swift has a good feel about it – lots of neat features that keep code succinct. Also it’s very hard not to like strict type checking with good type inference. With Apple pushing Swift as a replacement for Objective-C over time the libraries and APIs will become more and more “Swift-like”. Xcode should improve to at least offer the basic refactorings it has for other languages and stabilise which will make it a workable IDE – exceeding the capabilities of what’s available for a lot of languages.


Most importantly, the vast majority of existing Objective-C developers seem to quite like it – plenty of issues raised as well, but overall generally positive.

I think the future for Swift looks bright.

July 20, 2014 11:47 AM

July 18, 2014

Ben Fowler: How to rick-roll your friends with video brochures

My employers recently distributed a promotional video to everybody in the company in the form of video brochures, as part of their big rebranding effort. My puerile sense of humour ensured that I had to have some fun with this cheap, but quite neat piece of hardware. "Video brochures" belong to a class of devices known as s1 mp3 video players, which includes most cheap Chinese MP3 audio and video…

July 18, 2014 07:13 PM

July 17, 2014

Blue Hackers: Adverse Childhood Experience (ACE) questionnaire | acestoohigh.com

NOTE: the links referred to in this post may contain triggers. Make sure you have appropriate support available.

http://acestoohigh.com/got-your-ace-score/

There are 10 types of childhood trauma measured in the ACE Study, personal as well as ones related to other family members. Once you have your score, there are many useful insights later in the article.

The origin of this study was actually in an obesity clinic.

July 17, 2014 08:45 AM

Blue Hackers: Harvard 75-year longitudinal study released re men and happy lives

July 17, 2014 01:28 AM

July 15, 2014

Daniel Devine: Mildly Useful jQuery Plugins

Over the last few years I have written some mildly useful jQuery plugins. I've had them published on BitBucket but I've never really announced them. Here they are.

Read more…

July 15, 2014 06:36 AM

Ben Martin: Hookup wires can connect themselves...

A test of precision movement of the Lynxmotion AL5D robot arm, seeing if it could pluck a hookup wire from a whiteboard and insert it into an Arduino Uno. The result: yes it certainly can! To be able to go from a Fritzing layout file to an automatic real-world jumper setup, wires would have to be inserted in a specific order so that the gripper could overhang the unwired part of the header as it went along.

Video: Lynxmotion AL5D moving a jumper to an Arduino, from Ben Martin on Vimeo (http://player.vimeo.com/video/100631767).

July 15, 2014 04:22 AM

July 08, 2014

Blue Hackers: 7 Things to Remember When You Think You’re Not Good Enough

July 08, 2014 03:01 AM

July 06, 2014

Ben Martin: Ending up Small

It's easy to get swept up in trying to build a robot that has autonomy and the proximity, environment detection and inference, and feedback mechanisms that go along with that. There's something to be said about the fun of a direct drive robot with just power control and no feedback. So Tiny Tim was born! For reference, his wheels are 4 inches in diameter.


A Uno is used with an analog joystick shield to drive Tim. He is not intended to, and probably never will be able to, move autonomously. Direct command only. Packets are sent over a wireless link from the controller (5v) to Tim (3v3). Onboard Tim is an 8MHz/3v3 Pro Micro which I got back on Arduino day :) The motors are driven by a 1A Dual TB6612FNG Motor Driver which is operated very much like an L298 dual H-bridge (2 direction pins and a PWM). Tim's Pro Micro also talks to an OLED screen and his wireless board is put out behind the battery to try to isolate it a little from the metal. The OLED screen needed 3v3 signals, so Tim became a 3v3 logic robot.

He is missing a hub mount, so the wheel on the left is just sitting on the mini gearmotor's shaft. At the other end of the channel is a Tamiya Omni Wheel which tilts the body slightly forward. I've put the battery at the back to try to make sure he doesn't flip over during hard braking.

A custom PCB would remove most of the wires on Tim and be more robust. But most of Tim is currently put together from random bits that were available. The channel and beams should have a warning letting you know what can happen once you bolt a few of them together ;)

July 06, 2014 04:27 AM

July 05, 2014

Daniel Devine: Encrypted Backup to Amazon S3 (2014)

The following is a guide on how to do encrypted backups to Amazon S3. This is an updated version of my older system, which used the Dt-S3-Backup script, but I've decided to try a replacement which is available from the EPEL repository: duply. This guide has been updated for and tested on CentOS 6.

Read more…

July 05, 2014 05:30 AM

July 01, 2014

Ben Martin: The long road to an Autonomous Vehicle.

Some people play tennis, some try to build autonomous wheeled robots. I do the latter and the result has come to be known as “Terry”. On the long road toward that goal, what started as direct drive, “give the motor a PWM of X% of the total power”, led to having encoder feedback so that the wheel could be turned an exact amount over a given time. This has meant that the control interface no longer uses a direct power and direction slider but has buttons to perform specific motion tasks. The button with the road icon and a 1 will move the wheels forward a single rotation to advance Terry 6π inches forwards (6 inch wheels).

The red tinted buttons use the wheel encoders to provide precision movement. Enc left and Enc right turn Terry in place (one wheel forward, one wheel backwards). The Circus and Oval perform patrol-like maneuvers. Circus moves forward, turns in place, returns back, and turns around again. It's like a strafing patrol. On the other hand, the Oval turns move like a regular car, with both wheels going forward but one wheel moving slower than the other to create the turning effect. The pan and tilt control the on board camera for a good Terry eye view of the world.

I added two way web socket communications to Terry this week. So now the main battery voltage is updated and the current, err, current for each motor is shown at the bottom of the page. The two white boxes there give version information so you can see if the data is stale or not. I suspect a little async javascript on some model value will be coming so that items on the page will darken as they become stale due to lag on Terry or communications loss.

I've been researching proximity detection and will be integrating a Kinect for longer distance measures soon. Having good reliable distance measurements from Terry to objects is the first step in getting him to create maps of the environment for autonomous navigation.

July 01, 2014 01:22 PM

June 26, 2014

Ben Martin: Atmel Atmega1284

I started tinkering with the Atmega1284. Among other things it gives you an expansive 16KB of SRAM, 2 UARTs and, of course looking at the chip, a bunch more IO. A huge plus is that you can get a nice small SMD version and this 40 pin DIP monster with the same 1284. Yay for breadboard prototypers who don't oven bake each board configuration! The angle of the photo seems to include the interesting bits. Just ignore the two opamps on the far right :)


I had trouble getting this to work with a ceramic resonator. The two xtal lines are right next to each other with ground just above, but the 3 pins on the resonator were always a bit hard to get into the right configuration for these lines. Switching over to a real crystal and 22pF caps I got things to work. The symptoms I was having with the resonator included non-reproducibility: sometimes things seemed to upload, sometimes not. Also, make sure the DTR line is going through a cap to the reset pullup resistor. See the wiring just to the right of the 1284.

I haven't adapted the Arduino makefile to work on this yet, so unfortunately I still have to upload programs using the official IDE. I have the makefile compiling for the 1284 but das blinken doesn't work when I "make upload".

June 26, 2014 12:25 AM

June 20, 2014

Ben Martin: 3d printed part repo for attaching to Actobotics structure

I started a github repo for 3d print source files to attach things to Actobotics structure. First up is a mount to secure a rotary encoder directly to some channel as shown below.


As noted in the readme, I found a few issues post print, so had to switch to subtractive modelling (dremel time) to complete it. I learnt a few things from this though, so the issues are less likely to bite me in the future. There are some comments in the scad file so hopefully it will be useful to others and maybe one day I'll get a PR to update it. The README file in the repo will probably be the one true source for update info and standing issues with each scad file.

You can just see the bolt in below the rotary encoder. That has a little inset in the 3d part to stop it from free spinning when you screw in the 6-32 nut from below the channel.

June 20, 2014 12:54 AM

June 16, 2014

Ben Martin: Arduino and wireless networking

The nRF24L01 module allows cheap networking for Arduinos. Other than VCC, which it wants at 3v3, it uses SPI (4 wires) and a few auxiliary wires, one for an interrupt line. One of the libraries to drive these chips is RF24. Packets can be up to 32 bytes in length, and by default attempts to send less than 32 bytes result in sending a whole 32 byte block. Attempts to send more than 32 bytes seem to result in only the first 32 being sent. There is a CRC, which is 2 bytes and is on by default.

For doing some IoT things, using an HMAC might be a much better choice than just a CRC. The major advantage is that one can tell that something coming over a wireless link was from the expected source. Of course, if somebody has physical access to the arduinos then they can clone the HMAC key, but one has to define what they want from the system. The HMAC provides a fairly good assurance that sensor data is coming from the arduino you think it is coming from rather than somebody trying to send fake data. Another rather cute benefit is that the system doesn't accept junk data attacks. If your HMAC doesn't match, the packet doesn't get evaluated by the higher level software.
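To illustrate the check conceptually (a nodejs sketch, not the Arduino code shown below): the receiver recomputes the HMAC over the 32 byte payload with the shared key and only accepts the packet when it matches the HMAC that arrived with it.

var crypto = require('crypto');

// Recompute the SHA-256 HMAC of the payload with the shared key and compare it
// to the HMAC packet that followed it. Junk or spoofed data simply fails the
// comparison and never reaches the higher level software.
function packetIsAuthentic(key, payload, receivedHmac) {
    var expected = crypto.createHmac('sha256', key).update(payload).digest();
    return expected.toString('hex') === receivedHmac.toString('hex');
}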

Using a Sha256 HMAC the return ping times go from 50 to 150ms. The doubling can be explained by the network traffic, as the HMAC is 32 bytes and will double the amount of traffic. I think perhaps the extra 50ms goes into hashing but I'll have to measure that more specifically to find out.

Once I clean up the code a little I'll probably push it to github. A new RF24HMAC class delegates to the existing RF24 class and sends a HMAC packet right after the user data packet for you. The interface is similar, I'm adding some writenum() calls which I might make more like nodejs buffer. Once you have added the user data packet call done() to send everything including the HMAC packet.

RF24 radio(9,10);
RF24HMAC radiomac( radio, "wonderful key" );
radiomac.beginWrite();
radiomac.writeu32( time );
bool ok = radiomac.done();


The receiving end boils down to getting a "packet" which is really just the 32 bytes of user data that was sent. The one readAuthenticatedPacket() call actually gets 2 packets off the wire, the user data packet and the HMAC packet. If the hmac calculated locally for the user data packet does not match the HMAC that was received from the other end then you get a null pointer back from readAuthenticatedPacket(). The data is either authenticated or you don't get to see any of it.

RF24HMAC radiomac( radio, "wonderful key" );
uint8_t* packetData = 0;
if( packetData = radiomac.readAuthenticatedPacket() )
{
     int di = 0;
     uint32_t v = readu32( packetData, di );
     got_time = v;
     printf("Got payload %lu...\n\r",got_time);
}


Oh yeah, I also found this rather cute code to output hex while sniffing around.

June 16, 2014 02:06 AM

June 09, 2014

Blue Hackers: Workplace Bullying

Video: Workplace Bullying (https://www.youtube.com/embed/wAgg32weT80)

June 09, 2014 10:20 AM

June 03, 2014

Adrian Sutton: Swift

Apple have released Swift, their new programming language – designed to be familiar to Objective-C programmers and work well with the existing Cocoa frameworks. It’s far too soon to make substantial judgements about the language – that can only come after actually using it in real projects for some time. However, there’s nothing that stands out as incredibly broken, so with Apple’s backing it’s extremely unlikely that it won’t become a very commonly used language. After all, there’s plenty wrong with every other programming language and we manage to make do with them.

What I find most promising about it though is that many language design choices are justified by them preventing common causes of bugs in Objective-C or C (and many other languages).  For example:

“The cases of a switch statement do not “fall through” to the next case in Swift, avoiding common C errors caused by missing break statements.”

Excerpt From: Apple Inc. “The Swift Programming Language.” iBooks

They’ve replaced many common checkstyle/lint errors with better language design that makes the mistakes impossible (what a good idea). It could be argued that they could have taken more extreme approaches to find solutions or prevented more sources of errors with additional cleverness, but my initial take is that it has likely found a good balance between fixing common causes of bugs while still being familiar to Objective-C coders (its target audience) and working well with the existing frameworks and libraries.

We’ll likely find plenty to complain about, as always, but overall I suspect it will be a very nice language to work with.

June 03, 2014 08:49 AM

Ben Martin: TerryTorial: Wheel Feedback, Object detection, Larger size, and now a screen!

Terry the robot now has grown up a little, both semantically and physically. The 12 inch channel that was the link to the rear swivel wheel now has many friends which are up to 15 inches higher than the base beam. This places Terry's main pan/tilt camera at his top a little bit above table height, so he is far more of a presence than he used to be. On the upside, the wheel controller, shaft encoders, and batteries can all live on the lower level and there is more room for the actual core of Terry up top. I'm likely to add one or two more shelves to the middle of Terry for more controllers etc.


There is now also a 16x32 RGB matrix screen up front so Terry can tell you what he is thinking, what speed his wheels are rotating at, and if there is an obstacle that has been detected.

I'm rendering the framebuffer for the screen on the BeagleBone Black using Cairo and Pango (and thus FreeType). The image data is extracted from Cairo as 32bit RGBA and packed down into planar data that the arduino expects. That packed framebuffer is then sent over UART to an Arduino 328 which takes care of refreshing the RGB matrix so that fake colour levels are achieved using a software PWM/BAM implementation. Yay for the BBB having 4.5 UARTs. I need to work out how to bring up the TX-only UART as that is perfect for the one-way communication used to drive the screen.

I should be able to get better colour precision using a Teensy 3.x as the screen driver. The Cortex-M4 just has more cycles to be able to softpwm the screen faster to be able to get a greater perceived colour range.

I customized the font a little in FontForge. Specifically I modified the kerning for "Te" to not waste as much space. The font is based on Cantarell by Dave Crossland. I've only just started on this open font to LED matrix stuff, but with some form of PWM and FreeType rendering the data I hope to be able to get nice antialiased font renders, even at the low resolution of 16x32.

The wheel encoders make a huge difference to the whole experience. Even without full autonomy the encoders allow you to have a repeatable oval or patrol path which can be followed. Also, being able to set up the speed to avoid large acceleration or jerky movement means that the taller Terry is still a very stable Terry when on the move.

June 03, 2014 04:50 AM

May 28, 2014

Adrian Sutton: Static and Dynamic Languages

Did you know that it’s perfectly fine to enjoy programming in both static and dynamic languages?

— Honza Pokorny (@_honza) May 26, 2014

I do a lot of coding in Java and JavaScript and it’s never bothered me that one has static types and one dynamic (and I’ve used plenty of other languages from both camps as well – it amuses me slightly that the two I use most often tend to be viewed as the worst examples of each type). I can see pros and cons of both approaches – static typing detects a bunch of common errors sooner which saves time but there are inevitably times where you wind up just fighting the type system which wastes time again. Better statically typed languages waste less of your time but at some point they all cause pain that’s avoided with dynamic languages. It winds up coming down largely to personal preference and is a little affected by what you’re building.

What I have learnt however is that mixing static and dynamic languages too closely is a recipe for pain and frustration. LMAX develops the main exchange in Java and recently started using Spock for unit tests. Spock is amazing in many, many ways but it comes with the Groovy programming language which is dynamically typed. There isn’t, or perhaps shouldn’t be, any tighter coupling than a class and its unit test, so the combination of static and dynamic typing in this case is extremely frustrating.

It turns out that the way I work with languages differs depending on the type system – or rather on the tools available but they are largely affected by the type system. In Java I rename methods without a second thought, safe in the knowledge that the IDE will find all references and rename them. Similarly I can add a parameter to an API and then let the compiler find all the places I need to update. Once there are unit tests in Groovy however neither of those options is completely safe (and using the compiler to find errors is a complete non-starter).

When writing JavaScript I’m not under any illusion that the IDE will find every possible reference for me so I approach the problem differently. I explore using searches rather than compiler errors and turn to running tests to identify problems more quickly because the compiler isn’t there to help. I also tend to write tests differently, expecting that they will need to cover things that the compiler would otherwise have picked up. The system design also changes subtly to make this all work better.

With static and dynamic typing too closely mixed, the expectations and approaches to development become muddled and it winds up being a worst-of-both-worlds approach. I can’t count on the compiler helping me out anymore but I still have to spend time making it happy.

That doesn’t mean that static and dynamic languages can’t co-exist on the same project successfully, just that they need a clearly defined, and relatively stable, interface between them. The most common example being static typing on the server and JavaScript in the browser, in which case the API acts as a buffer between the two. It could just as easily be server to server communication or a defined module API between the two though and still work.

May 28, 2014 02:48 AM

May 27, 2014

Adrian Sutton: Automated Tests Are a Code Smell

Writing automated tests to prove software works correctly is now well established and relying solely or even primarily on manual testing is considered a “very bad sign”. A comprehensive automated test suite gives us a great deal of confidence that if we break something we’ll find out before it hits production.

Despite that, automated tests shouldn’t be our first line of defence against things going wrong. Sure they’re powerful, but all they can do is point out that something is broken; they can’t do anything to prevent it being broken in the first place.

So when we write tests, we should be asking ourselves, can I prevent this problem from happening in the first place? Is there a different design which makes it impossible for this problem to happen?

For example, checkstyle has support for import control, allowing you to write assertions about what different packages can depend on. So package A can use package B but package C can’t. If you’re concerned about package structure it makes a fair bit of sense. Except that it’s a form of testing and the feedback comes late in the cycle. Much better would be to split the code into separate source trees so that the restrictions are made explicit to the compiler and IDE. That way autocomplete won’t offer suggestions from forbidden packages and the code won’t compile if you use them. It is therefore much harder to do the wrong thing and feedback comes much sooner.

Each time we write an automated test, we’re admitting that there is a reasonable likelihood that someone could mistakenly break it. In the majority of cases an automated test is the best we can do, but we should be on the lookout for opportunities to replace automated tests with algorithms, designs or tools that eliminate those mistakes in the first place or at least identify them earlier.


May 27, 2014 03:09 AM

May 26, 2014

Ben Martin: To beep or not to beep.

There comes a time in the life of any semi autonomous robot when the study of the classics occurs. For Terry, the BeagleBone Black driven Actobotics construction, short of visiting the Globe, the answer would currently seem to be "To Beep".



Terry is now run by a RoboClaw 2x5A controller board with two 1024 pulse per revolution shaft encoders attached to an 8:1 ratio gearing on the wheel. This gives an overall precision of about 13 bits from the shaft encoders relative to the wheel. In the future I can use this magnificent 4 inch alloy gear wheel as a torque multiplier by moving the motor to its own tooth pinion gear. Terry now also has an IR distance sensor out front, and given the 512MB of RAM on the BeagleBone some fairly interesting mapping can be built up by just driving around and storing readings. Distance + accurate shaft encoders for the win?

To use the RoboClaw from the BeagleBone Black I created a little nodejs class. It's not fully complete yet, but it is already somewhat useful. I found creating the nodejs class interesting because, when coding for MCUs, one gets quite used to the low level style of sending bytes and stalling until the reply comes in. Such a style doesn't translate well to nodejs. Normally one would see something like:

doMagic( 7, rabbits, function() {
    console.log("got them rabbitses, we did");
});

Where the callback function is performed when the task of pulling the number of rabbits from the hat is complete. For using the RoboClaw you might have something like

claw.getVersion( function( v ) {
  console.log("version:" + v.value );
});
claw.setQPPS( 1024 );

Which all seems fairly innocent. The call to getVersion will send the VERSION command to the roboclaw with a given address (you can have many claws on the same bus). The claw should then give back a version string and a checksum byte. The checksum can be stripped off and verified by the nodejs.

The trouble is that the second call, to set the quadrature decoder pulses per second (QPPS), will start to happen before the RoboClaw has had a chance to service the getVersion call. Trying to write a second command before you have the full reply from the first is a recipe for disaster. So the nodejs RoboClaw class maintains a queue of requests. The setQPPS() is not run right away, but enqueued for later. Once the bytes come back over the UART in response to getVersion(), the callback is run and the RoboClaw nodejs class then picks the next command (setQPPS) and sends that to the RoboClaw. This way you get the strictly ordered serial IO that the RoboClaw hardware needs, but the nodejs can execute many commands without any stalling. Each command is handled in the order it is written in the nodejs source code; it just doesn't execute right away.
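A minimal sketch of that queueing idea (illustrative names only, not the actual RoboClaw class): commands are pushed onto a queue as they are issued, and the next one is only written to the UART once the previous command's reply has been fully handled.

// Hypothetical queue wrapper around a serial port object that has a write() method.
function CommandQueue(port) {
    this.port = port;
    this.queue = [];
    this.busy = false;
}

// Enqueue a command; it is written immediately only if nothing else is in flight.
CommandQueue.prototype.enqueue = function(bytes, onReply) {
    this.queue.push({ bytes: bytes, onReply: onReply });
    this.next();
};

CommandQueue.prototype.next = function() {
    if (this.busy || this.queue.length === 0)
        return;
    this.busy = true;
    this.port.write(this.queue[0].bytes);
};

// Call this once a complete, checksum verified reply has arrived from the claw.
CommandQueue.prototype.handleReply = function(reply) {
    var cmd = this.queue.shift();
    this.busy = false;
    if (cmd && cmd.onReply)
        cmd.onReply(reply);
    this.next();   // now the next queued command (e.g. setQPPS) goes out
};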

Next up is trying to tune the PID controller variables for the robot. One might expect a command such as "move 1 wheel circumference forward" to work fairly exactly if the quality of the shaft encoders is good. Though for the wheel size and QPPS I get a little bit of overshoot currently. I suspect the current D is not sufficient to pull it up in time. It may be a non issue when the wheels are down and friction helps the slow down process.

Terry now also has an IMU onboard. I created a small class to read from the TWI HMC5883 magneto. You'll have to figure out your declination and I have a small hack at about line 100 to alter the heading so that a 0 degree reading means that physically Terry is facing North. I should bring that parameter out of the class so that it's easier to adjust for other robots that might mount their IMU in a different physical orientation than what I happened to use.

May 26, 2014 08:07 AM

May 25, 2014

Adrian Sutton: Finding Balance with the SOLID Principles

Kevlin Henney’s talk from YOW is a great deconstruction of the SOLID principles, delving into what they really say as opposed to what people think they say, and what we can learn from them in a nuanced, balanced way.

Far too often we take these rules as absolutes under the mistaken assumption that if we always follow them, our software will be more maintainable. As an industry we seem to be seeking a silver bullet for architecture that solves all our problems just like we’ve searched for a silver bullet language for so long. There is value in these principles and we should take the time to learn from them, but they are just tools in our toolkit that we need to learn not only how to use, but when.

May 25, 2014 10:09 PM

May 24, 2014

Adrian SuttonDon’t Get Around Much Anymore

Human Resource Executive Online has an interesting story related to my previous entry:

Ask yourself this question: If you were offered a new job in another city where you have no ties or networks, and you suspected that the job would probably not last more than three years (which is a good guess), how much of a raise would they have to give you to get you to move? Remember, you’d likely be trying to find a new job in a place where you don’t have any connections when that job ended. I suspect it would be a lot more than most employers are willing to pay.

Again though, this should be driving up the number of remote jobs – so where are they all?

May 24, 2014 08:14 AM

Adrian SuttonWhere Are All The Remote Jobs?

Workshifting by Citrix Online CC BY-NC-ND 2.0

It seems like forever that people have been arguing that remote working is the future. Why limit yourself to just the people who live near your office when hiring? Why suffer the distractions of an office when you could define your own environment in your very own home? Why waste employee time commuting? Surely by now every employee would want to be working from wherever they choose and employers would be forced to accept it, if not actively encourage it to save on office costs and get the best employees.

Yet somehow despite all the hype and promise there seem to be exceedingly few jobs around that allow for remote work. The fastest way I know of to ensure a recruiter never contacts me again is to simply mention that I’m based in Brisbane Australia and have no interest in moving. That greatly surprises me – especially in the technology industry.

When I do see jobs that support remote work it’s almost always for companies that primarily work on open source code or companies that primarily hire employees based on either their involvement in the company’s community or their other open source contributions.

So perhaps one of the big reasons employers aren’t adopting remote work as much is the difficulty in hiring the right people, training them, integrating them into the company and evaluating whether they’re actually working out. Remote working involves a significant amount of trust between employer and employee. New employees generally haven’t had a chance to build up that trust unless they were already contributing to the company’s community or demonstrating their abilities through open source. Unfortunately, looking at open source contributions excludes any developer who enjoys their job so much that they spend all their spare coding time on it and doesn’t currently work in a job that involves a lot of open source.

That fits with my current situation – for the last two and a bit years I’ve had the luxury of working from home, but it only happened because I originally worked in the London office and then resigned to move home. Fortunately they were keen to keep me so I wound up staying on and working from home. Later a second developer went through the same process and now works from New Zealand.

Once trust and culture are established we seem to have plenty of tools to make remote working extremely effective. If remote working is really the future, though, we’ll need to find better ways to establish that initial trust and on-board new employees.

Are there hordes of remote working jobs out there that I just don’t hear about? If not, what’s stopping it from taking off?

May 24, 2014 04:38 AM

May 13, 2014

Blue HackersKids Matter – mental health for schoolkids

The KidsMatter site is funded by the Australian Commonwealth Department of Health. There is a specific section for primary schools, KidsMatter Primary.

Seems like an excellent initiative that’s been going for some years already, but not every school will be involved with it yet (cost shouldn’t be a hindrance, it’s free).

So please take a look and mention it to your contacts (principal, P&C, teacher) at your kids’ school!

May 13, 2014 10:34 AM

May 12, 2014

Adrian SuttonCombining Output of Multiple Bash Commands Into One Line

I wanted to create a CSV file showing the number of JUnit tests in our codebase vs the number of Spock tests over time. I can count the number of tests, along with the revision pretty easily with:

git svn find-rev HEAD
find src/test -name '*Test.java' | wc -l
find src/test -name '*Spec.groovy' | wc -l

But that outputs the results on three separate lines and I really want them on one line with some separator (comma, space or tab). The simplest way to achieve that is:

echo "$(git svn find-rev HEAD), $(find src/test -name '*Test.java' | wc -l), $(find src/test -name '*Spec.groovy' | wc -l)"

It’s also very handy to remember that echo has a -n option which omits the trailing newline, so:

echo -n 'Revision: ' && git svn find-rev HEAD

outputs:

Revision: 55450

May 12, 2014 11:55 PM

May 10, 2014

Ben MartinGoing from Arduino TWI to BeagleBone Black Bonescript

It's handy for a robot to know which way it is facing, and for that a magnetometer can tell you where north is relative to the chip's orientation. For this I used the HMC5883. I started out with a cut-down, minimal version of the code running on Arduino and then started porting it over to Bonescript for execution on the BeagleBone Black.

The HMC5883 only needs power (3v3), ground, and the two wires of a TWI.

One of the larger API changes of the port is the Arduino line:

Wire.requestFrom( HMC5883_ADDRESS_MAG, 6 );

Which takes the TWI address of the chip you want 6 bytes from. It has to be preceded by another TWI transmission to set the register address that reading should start from, in this case HMC5883_REGISTER_MAG_OUT_X_H_M = 0x03.

On the Bonescript side this becomes the following. The TWI address isn't passed because the wire object already knows it; instead you pass the register address to start reading from directly to the request to read bytes:


self.wire.readBytes( HMC5883_REGISTER_MAG_OUT_X_H_M,
                     BUFFER_SIZE,
                     function(err, res) { ... });

I pushed the code up to github as bbbMagnetometerHMC5883. Note that getting "north" requires both a proper declination angle for your location and some modification to allow for how your magnetometer chip is physically oriented on your robot. Actual use of the chip is really easy with the async style of nodejs.
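
As a minimal sketch of that last step (not the code in the repository, and assuming res arrives as a node Buffer), the HMC5883 returns its axes in X, Z, Y order as big-endian signed 16-bit values:

function bufferToHeading(res, declinationRadians) {
    var x = res.readInt16BE(0);
    var z = res.readInt16BE(2); // note the X, Z, Y register ordering
    var y = res.readInt16BE(4);
    var heading = Math.atan2(y, x) + declinationRadians;
    if (heading < 0) heading += 2 * Math.PI;
    if (heading > 2 * Math.PI) heading -= 2 * Math.PI;
    return heading * 180 / Math.PI; // degrees
}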

May 10, 2014 11:52 PM

May 07, 2014

Adrian SuttonThe Single Responsibility Principle (and what’s wrong with it)

Marco Cecconi - I don’t love the single responsibility principle:
The principle is arbitrary in itself. What makes one and only Reason To Change™ always, unequivocally better than two Reasons To Change™? The number one sounds great, but I’m a strong advocate of simple things and sometimes a class with more reasons to change is the simplest thing. I agree that we mustn’t do mega-classes that try to do too many different things, but why should a class have one single Reason To Change™? I flatly disagree with the statement. It’s not supported by any fact. I could say “three” reasons and be just as arbitrary.
Interesting critique of the single responsibility principle. The quote above is perhaps one of the weaker arguments against it, but it struck a nerve with me because all too often we assume that because a rule sounds good it has some validity. In reality, picking one responsibility for a class is quite arbitrary and, as the article argues, deciding what counts as a responsibility is even more arbitrary.
So, whilst I agree completely on the premise of not making ginormous classes, I think this principle is not at all helpful in either illustrating the concept or even identifying unequivocally problematic cases so they can be corrected. In the same way, anemic micro-classes that do little are a very complicated way of organizing a code base.
Given that I’ve spent time today fixing a bug that was almost entirely caused by having too many classes with too narrowly focused responsibilities, which made the combined structure difficult to reason about and prone to unintended consequences, this resonates strongly as well. Making responsibilities too fine-grained is just as detrimental to the maintainability of code and, somewhat counterintuitively, also to its reusability. Reusing a single class is far simpler than having to work out how to piece together ten separate classes in the right way (hence the existence of builders, which create a facade so that everything looks like a single unit again). I can see a lot of value in discussing things in terms of coupling and cohesion rather than responsibilities, and using those concepts to find the right balance.

May 07, 2014 04:25 AM

April 29, 2014

Ben MartinActobotics and ServoCity, addictive Robot fun!

I have had the great fortune to play around with some of the Actobotics components. The main downside I've seen so far is that it's highly addictive!


What started out as a 3-wheel robot base then gained a Web interface and an on-board access point to control the speed and steering of the beast. The access point now runs openWRT, so with the Beagle there are two Linux machines on board!

The pan and tilt is also controlled via the Web interface, and SVGA video streams over the access point from the top-mounted web camera. There is a lithium battery mounted in the channel below the BeagleBone Black which powers it, the camera, and the TP-Link access point. That block of 8 AA rechargeables is currently tasked with turning the motors and servo.

It's early days yet for the robot. With a gyro I can make the pan gearmotor software-limited so it doesn't wrap the cables around by turning through multiple revolutions. Some feedback from the wheel gearmotors, location awareness, ultrasound etc will start to make commands like "go to the kitchen" more of a possibility.

April 29, 2014 12:52 PM

April 21, 2014

Clinton RoyPyCon Australia Call for Proposals closes Friday 25th April

There’s less than a week left to get your proposal in for PyCon Australia 2014, Australia’s national Python conference. We focus on first-time speakers, so please get in touch if you have any questions. The full details are available at http://2014.pycon-au.org/cfp

 

 

 



April 21, 2014 09:21 AM

April 17, 2014

Ben FowlerBad HDMI cables and CEC

Fig: nasty HDMI cable from Maplin that apparently doesn't support CEC

I got myself a Raspberry Pi (a tiny, £30 hacker-friendly computer), and then set it up with RaspXBMC to stream my movie rips off the house file server to my Samsung 6-Series TV. Now, you're supposed to be able to use the TV remote to drive XBMC and it didn't work initially. No matter how hard I tried, I simply could not

April 17, 2014 09:26 PM

April 14, 2014

Adrian SuttonTesting@LMAX – Testing in Live

Previously in the Testing@LMAX series I’ve mentioned the way we’ve provided isolation between tests, allowing us to run them in parallel. That isolation extends all the way up to supporting a multi-tenancy module called venues which allows us to essentially run multiple, functionally separate exchanges on a single deployment of the LMAX Exchange.

We use the isolation of venues to reduce the amount of hardware we need to run our three separate liquidity pools (LMAX Professional, LMAX Institutional and LMAX Interbank), but that’s not all. We actually use the isolation venues provide to extend our testing all the way into production.

We have a subset of our acceptance tests which, using venues, are run against the exchange as it is deployed in production, using the same APIs our clients, MTF members and internal staff would use, to test that the exchange is fully functional. We have an additional venue on each deployment of the exchange that is used to run these tests. The tests connect to the exchange via the same gateways as our clients (FIX, web, etc) and place real trades that match using the exact same systems and code paths as in the “real” venues. Code-wise there’s nothing special about the test venue; it just so happens that the only external party that ever connects to it is our testing framework.

We don’t run our full suite of acceptance tests against the live exchange due to the time that would take and to ensure that we don’t affect the performance or latency of the exchange. Plus, we already know the code works correctly because it’s already run through continuous integration. Testing in live is focussed on verifying that the various components of the exchange are hooked up correctly and that the deployment process worked correctly. As such we’ve selected a subset of our tests that exercise the key functions of each of the services that make up the exchange. This includes things like testing that an MTF member can connect and provide prices, that clients can connect via either FIX or web and place orders that match against those prices and that the activity in the exchange is reported out correctly via trade reporting and market data feeds.

We run testing in live as an automated step at the start of our release, prior to making any changes, and again at the end of the release to ensure the release worked properly. If testing in live fails we roll back the release. We also run it automatically throughout the day as part of our monitoring system, and it is run manually whenever manual work is done or whenever there is any concern about how the exchange is functioning.

While we have quite a lot of other monitoring systems, the ability to run active monitoring like this against the production exchange, going as far as actions that change state, gives us a significant boost in confidence that everything is working as it should, and helps isolate problems more quickly when things aren’t.

April 14, 2014 12:36 PM

April 12, 2014

Adrian SuttonTesting@LMAX – Test Isolation

One of the most common reasons people avoid writing end-to-end acceptance tests is how difficult it is to make them run fast. Chief amongst the causes is the time required to start up the entire service and shut it down again. At LMAX, with the full exchange consisting of a large number of different services, multiple databases and other components, start-up is far too slow to be done for each test, so our acceptance tests are designed to run against the same server instance without interfering with each other.

Most functions in the exchange can be isolated by simply creating one or more accounts and instruments that are unique to the particular test. The test simply starts by using the usual administration APIs to create the users and instruments it needs. Just this basic level of isolation allows us to test a huge amount of the exchange’s functionality – all of the matching behaviour for example. With the tests completely isolated from each other we can run them in parallel against the same server and dramatically reduce the amount of hardware required to run all the acceptance tests in a reasonable time.

But there are many functions in an exchange that cut across instruments and accounts – for example the exchange rates used to convert the profit or loss a user earns in one currency back to their account base currency. Initially, tests that needed to control exchange rates could only be run sequentially, each one taking over the entire exchange while it ran and significantly increasing the time required for the test run. More recently, however, we’ve made the concept of currency completely generic – tests now simply create the unique currencies they need and are able to set the exchange rates between those currencies without affecting any other tests. This makes our acceptance tests run significantly faster, but also means new currencies can be supported in the exchange without any code changes – just use the administration UI to create the desired currency.

We’ve applied the same approach of creating a completely generic solution even when there is a known set of values in a range of other areas, giving us better test isolation and often making it easier to respond to unexpected future requirements. Sometimes this adds complexity to the code or administration options which could have been avoided but the increased testability is well worth it.

The ultimate level of test isolation, however, is our support for multiple venues running in a single instance of the exchange. This essentially moves the exchange to a multi-tenancy model: a venue encapsulates all aspects of an exchange, allowing us to test back office reports that track how money moves around the exchange, reconciliation reports that cover all trades and many other functions that report on the state of the exchange as a whole.

With the LMAX Exchange now essentially running in three forms (Professional, Institutional and Interbank) this support for venues is more than just an optimisation for tests – we can co-host different instances of the exchange on the same production hardware, reducing not only the upfront investment required but also the ongoing maintenance costs.

Overall we’ve seen that making features more easily testable (using end-to-end acceptance tests) surprisingly often delivers business benefit, making the investment well worth it.

April 12, 2014 12:36 PM

April 06, 2014

Blue HackersShift workers beware: Sleep loss may cause brain damage, new research says

April 06, 2014 08:48 AM

April 01, 2014

Blue HackersStudents and Mental Health at University

The Guardian is collecting experiences from students regarding mental health at university. I must have missed this item earlier as there are only a few days left now to get your contribution in. Please take a look and put in your thoughts!

It’s always excellent to see mental health discussed. It helps us and society as a whole.

April 01, 2014 11:34 PM

Adrian SuttonTesting@LMAX – Time Travel and the TARDIS

Testing time-related functions is always a challenge – generally it involves adding some form of abstraction over the system clock which can then be stubbed, mocked or otherwise controlled by unit tests. At LMAX we like the confidence that end-to-end acceptance tests give us but, like most financial systems, a significant amount of our functionality is highly time dependent, so we need the same kind of control over time in a way that works even when the system is running as a whole (which means it’s running in multiple different JVMs, possibly even on different servers).

We’ve achieved that by building on the same abstracted clock as is used in unit tests but exposing it in a system-wide, distributed way. To stay as close as possible to real-world conditions we have somewhat reduced control in acceptance tests; in particular time always progresses forward – there’s no pause button. However we do have the ability to travel forward in time so that we can test scenarios that span multiple days, weeks or even months quickly. When running acceptance tests, the system clock uses a “time travel” implementation. Initially this clock simply returns the current system time, but it also listens for special time messages on the system’s messaging bus. When one of these time messages is received, the clock calculates the difference between the time specified in the message and the current system clock time, and records it. From then on, when it’s asked for the time, the clock adds that difference to the current system time. As a result, when a time message is received time immediately jumps forward to the specified time and then continues advancing at the same rate as the system clock.
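
A minimal sketch of that clock idea (written in JavaScript for brevity with a made-up message bus API – it is not LMAX’s code) might look like:

function TimeTravelClock(messageBus) {
    var offsetMillis = 0;

    // Assumes the bus delivers the target time from a "time travel" message.
    messageBus.on('timeTravel', function(targetMillis) {
        // Remember how far ahead of the real system clock we should now sit.
        offsetMillis = targetMillis - Date.now();
    });

    this.now = function() {
        // Time keeps advancing at the normal rate, just shifted forward.
        return Date.now() + offsetMillis;
    };
}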

Like all good schedulers, ours are written in a way that ensures events fire in the correct order even if time suddenly jumps forward past the point at which an event should have triggered. So receiving a time message not only jumps time forward, it also triggers all the events that should have fired during the period we skipped, allowing us to test that they did their job correctly.

The time messages are published by a time travel service which is only run in our acceptance test environment – it exposes a JMX method which our acceptance tests use to set the current system time. Each service that uses time also exposes its current time, and the time its schedulers have reached, via JMX, so when a test time travels we can wait until the message has been received by each service and all the scheduled events have finished running.

The TARDIS

The trouble with controlling time like this is that it affects the entire system so we can’t run multiple tests at the same time or they would interfere with each other. Having to run tests sequentially significantly increases the feedback cycle. To solve this we added the TARDIS to the DSL that runs our acceptance tests. The TARDIS provides a central point of control for multiple test cases running in parallel, coordinating time travel so that the tests all move forward together, without the actual test code needing to care about any of the details or synchronisation.

The TARDIS hooks into the DSL at two points – when a test asks to time travel and when a test finishes (by either passing or failing). When a test asks to time travel, the TARDIS tracks the destination times being requested and blocks the test until all tests are either ready to time travel or have completed. It then time travels to the earliest requested time and wakes up any tests that requested that time point so they can continue running. Tests that requested a time point further in the future remain paused, waiting for the next time travel.
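
The coordination boils down to something like the sketch below (hypothetical names, simplified to a single process and written in JavaScript; the real TARDIS lives inside the acceptance test DSL):

function Tardis(travelTo, totalTests) {
    // travelTo(time) is assumed to perform the actual system-wide time travel.
    var remaining = totalTests; // tests that have not yet finished
    var waiting = [];           // { time, resolve } for each paused test

    function maybeTravel() {
        // Only travel once every still-running test is paused at a destination.
        if (waiting.length === 0 || waiting.length < remaining) return;
        var earliest = Math.min.apply(null, waiting.map(function(w) { return w.time; }));
        travelTo(earliest);
        var stillWaiting = [];
        waiting.forEach(function(w) {
            // Wake the tests that asked for this time point; later ones keep waiting.
            if (w.time === earliest) w.resolve();
            else stillWaiting.push(w);
        });
        waiting = stillWaiting;
    }

    this.timeTravelTo = function(time) {
        return new Promise(function(resolve) {
            waiting.push({ time: time, resolve: resolve });
            maybeTravel();
        });
    };

    this.testFinished = function() {
        remaining--;
        maybeTravel();
    };
}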

Since we had a lot of time travel tests already written before we invented the TARDIS this approach allowed us to start running them in parallel without having to rewrite them – the TARDIS is simply integrated into the DSL framework we use for all tests.

Currently the TARDIS only works for tests running in the same JVM, so essentially it allows test cases to run in parallel with other cases from the same test suite, but it doesn’t allow multiple test suites on separate Romero agents to run in parallel. The next step in its evolution will be to move the TARDIS out of the test’s DSL and provide it as an API from the time travel service on the server. At that point we can run multiple test suites in parallel against the same server. However, we haven’t yet done the research to determine what, if any, benefit we’d get from that change as different test suites may have very different time travel patterns and thus spend most of their time at interim time points waiting for other tests. Also the load on servers during time travel is quite high due to the number of scheduled jobs that can fire so running multiple test suites at once may not be viable.

* Time being in sync is actually a more complex concept than it first appears. The overall architecture of our system meant this approach to time actually did provide very accurate time sources relative to our main “source of all truth”, the exchange venue itself, which is what we really cared about. Even so, anything that had to be strictly “in sync” generated a timestamp in the service that triggered it and included that timestamp in the outgoing event, which is the only sane way to do such things.

April 01, 2014 04:44 AM

March 30, 2014

Anthony TownsBitcoincerns

Bitcoincerns — as in Bitcoin concerns! Get it? Hahaha.

Despite having an interest in ecash, I haven’t invested in any bitcoins. I haven’t thought about it in any depth, but my intuition says I don’t really trust it. I’m not really sure why, so I thought I’d write about it to see if I could come up with some answers.

The first thing about bitcoin that bothered me when I first heard about it was the concept of burning CPU cycles for cash — ie, set up a bitcoin miner, get bitcoins, …, profit. The idea of making money by running calculations that don’t provide any benefit to anyone is actually kind of offensive IMO. That’s one of the reasons I didn’t like Microsoft’s Hashcash back in the day. I think that’s not actually correct, though, and that the calculations being run by miners are actually useful in that they ensure the validity of bitcoin transfers.

I’m not particularly bothered by the deflationary expectations people have of bitcoin. The “wild success” cases I’ve seen for bitcoin estimate their value by hand-wavy arguments where you take a crazy big number, divide it by the 20M max bitcoins that are available, and end up with a crazy big number per bitcoin. Here’s the argument I’d make: someday many transactions will take place purely online using bitcoin, let’s say 75% of all transactions in the world by value. Gross World Product (GDP globally) is $40T, so 75% of that is $30T per year. With bitcoin, each coin can participate in a transaction every ten minutes, so that’s up to about 52,000 transactions a year, and there are up to 20M bitcoins. So if each bitcoin is active 100% of the time, you’d end up with capacity for about 1.04T bitcoin transactions per year, and an exchange rate of $28 per bitcoin, growing with world GDP. If, despite accounting for 75% of all transactions, each bitcoin is only active once an hour, multiply that figure by six for $168 per bitcoin.
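
Since there are a few numbers flying around, here is the back-of-envelope calculation spelled out (using the post’s own assumptions, and JavaScript purely for the arithmetic):

var gwpPerYear = 40e12;                 // $40T Gross World Product
var bitcoinShare = 0.75;                // 75% of transactions by value
var maxCoins = 20e6;
var txPerCoinPerYear = 6 * 24 * 365;    // one transaction every ten minutes

var perCoin = (gwpPerYear * bitcoinShare) / (maxCoins * txPerCoinPerYear);
console.log(perCoin.toFixed(2));        // ~28.5 – roughly the "$28" figure
console.log((perCoin * 6).toFixed(2));  // ~171 – roughly the "$168" once-an-hour figure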

That assumes bitcoins are used entirely as a medium of exchange, rather than hoarded as a store of value. If bitcoins got so expensive that the smallest unit, the “satoshi”, was only just worth a single Vietnamese Dong, then 21,107 satoshi would be worth $1 USD, and a single bitcoin would be worth $4737 USD. You’d then only need 739k bitcoins each participating in a transaction once an hour to take care of 75% of the world’s transactions, with the remaining 19M bitcoins acting as a value store worth about $91B. In the grand scheme of things, that’s not really very much money. I think if you made bitcoins much more expensive than that you’d start cutting into the proportion of the world’s transactions that you can actually account for, which would start forcing you to use other cryptocurrencies for microtransactions, eg.

Ultimately, I think you’d start hitting practical limitations trying to put 75% of the world’s transactions through a single ledger (ie hitting bandwidth, storage and processing constraints), and for bitcoin, that would mean having alternate ledgers which is equivalent to alternate currencies. That would involve some tradeoffs — for bitcoin-like cryptocurrencies you’d have to account for how volatile alternative currencies are, and how amenable the blockchains are to compromise, but, provided there are trusted online exchanges to convert one cryptocurrency into another, that’s probably about it. Alternate cryptocurrencies place additional constraints on the maximum value of bitcoin itself, by reducing the maximum amount of GWP happening in bitcoin versus other currencies.

It’s not clear to me how much value bitcoin has as a value store. Compared to precious metals, it’s much easier to transport, much easier to access, much less expensive to store and secure. On the other hand, it’s much easier to destroy or steal. It’s currently also very volatile. As a store of value, the only things that would make it better or worse than an alternative cryptocurrency are (a) how volatile it is, (b) how easy it is to exchange for other goods (liquidity), and (c) how secure the blockchain/algorithms/etc are. Of those, volatility seems like the biggest sticking point. I don’t think it’s unrealistic to imagine wanting to store, say, $1T in cryptocurrency (rather than gold bullion, say), but with only 20M bitcoins, that would mean each bitcoin was worth at least $50,000. Given a current price of about $500, that’s a long way away — and since there are a lot of things that could happen in the meantime, I think high volatility at present is a pretty plausible outcome.

I’m not sure if it’s possible or not, but I have to wonder if a bitcoin based cryptocurrency designed to be resistant to volatility would be implementable. I’m thinking (a) a funded exchange guaranteeing a minimum exchange rate for the currency, and (b) a maximum number of coins and coin generation rate for miners that makes that exchange plausible. The exchange for, let’s call it “bitbullion”, should self-fund to some extent by selling new bitbullion at a price of 10% above guidance, and buying at a price of 10% below guidance (and adjusting guidance up or down slightly any time it buys or sells, purely in order to stay solvent).

I don’t know what the crypto underlying the bitcoin blockchain actually is. I’m surprised it’s held up long enough to get to where bitcoin already is, frankly. There’s nominally $6B worth of bitcoins out there, so it would seem like you could make a reasonable profit if you could hack the algorithm. If there were hundreds of billions or trillions of dollars worth of value stored in cryptocurrency, that would be an even greater risk: being able to steal $1B would tempt a lot of people, being able to destroy $100B, especially if you could pick your target, would tempt a bunch more.

So in any event, the economic/deflation concerns seem assailable to me. The volatility not so much, but I’m not looking to replace my bank at the moment, so that doesn’t bother me either.

I’m very skeptical about the origins of bitcoin. The fact it’s the first successful cryptocurrency, and also the first definitively non-anonymous one is pretty intriguing in my book. Previous cryptocurrencies like Chaum’s ecash focussed on allowing Alice to pay Bob $1 without there being a record of anything other than Alice is $1 poorer, and Bob is $1 richer. Bitcoin does exactly the opposite, providing nothing more than a globally verifiable record of who paid whom how much at what time. That seems like a dream come true for law enforcement — you don’t even have to get a warrant to review the transactions for an account, because everyone’s accounts are already completely public. Of course, you still have to find some way to associate a bitcoin wallet id with an actual person, but I suspect that’s a challenge with any possible cryptocurrency. I’m not quite sure what the status of the digicash/ecash patents are/were, but they were due to expire sometime around now (give or take a few years), I think.

The second thing that strikes me as odd about bitcoin is how easily it’s avoided being regulated to death. I had expected the SEC to decide that bitcoins are a commodity with no real difference to a share certificate, and that as a consequence they can only be traded using regulated exchanges by financial professionals, or similar. Even if bitcoins still count as new enough to only have gotten a knee-jerk regulatory response rather than a considered one (which, at $500 a pop and with significant mainstream media coverage, I doubt), I would have expected something more along the lines of “bitcoin trading is likely to come under regulation XYZ, operating or using an unregulated exchange is likely to be a crime, contact a lawyer” rather than “we’re looking into it”. That makes it seem like bitcoin has influential friends who aren’t being very vocal in public, and conspiracy theories involving NSA and CIA/FBI folks suggesting that leaving bitcoin alone for now might help fight crime seem more plausible than ones involving Gates or Soros or someone secretly creating a new financial world order.

The other aspect is that it seems like there’s only really four plausible creators of bitcoin: one or more super smart academic types, a private startup of some sort, an intelligence agency, or a criminal outfit. It seems unlikely to me that a criminal outfit would create a cryptocurrency with a strong audit trail, but I guess you never know. It seems massively unlikely that a legitimate private company would still be secret, rather than cashing out. Likewise it seems unlikely that people who’d just done it because it seemed like an interesting idea would manage to remain anonymous still; though that said, cryptogeeks are weird like that.

If it was created by an intelligence agency, then its life to date makes some sense: advertise it as anonymous online cash that’s great for illegal stuff like buying drugs and can’t be tracked, sucker in a bunch of criminals to using it, then catch them, confiscate the money, and follow the audit trail to catch more folks. If that’s only worked for silk road folks, that’s probably pretty small-time. If bitcoin was successfully marketed as “anonymous, secure cryptocurrency” to organised crime or terrorists, and that gave you another angle to attack some of those networks, you could be on to something. It doesn’t seem like it would be difficult to either break into MtGox and other trading sites to gain an initial mapping between bitcoins and real identities, or to analyse the blockchain comprehensively enough to see through most attempts at bitcoin laundering.

Not that I actually have a problem with any of that. And honestly, if secret government agencies lean on other secret government agencies in order to create an effective and efficient online currency to fight crime, that’s probably a win-win as far as I’m concerned. One concern I guess I have, though, is that if you assume a bunch of law-enforcement cryptonerds built bitcoin, they might also have a way of “turning it off” — perhaps a real compromise in the crypto that means they can easily create forks of the blockchain and make bitcoins useless, or just enough processor power that they can break it by brute force, or even just some partial results in how to break bitcoin that would destroy confidence in it, and destroy the value of any bitcoins. It’d be fairly risky to know of such a flaw, and trust that it wouldn’t be uncovered by the public crypto research community, though.

All that said, if you ignore the criminal and megalomaniacal ideas for bitcoin, and assume the crypto’s sound, it’s pretty interesting. At the moment, a satoshi is worth 5/10,000ths of a cent, which would be awesome for microtransactions if the transaction fee wasn’t at 5c. Hmm, looks like dogecoin probably has the right settings for microtransactions to work. Maybe I should have another go at the pay-per-byte wireless capping I was thinking of that one time… Apart from microtransactions, some of the conditional/multiparty transaction possibilities are probably pretty interesting too.

March 30, 2014 01:00 PM

Adrian SuttonTesting@LMAX – Distributed Builds with Romero

LMAX has invested quite a lot of time into building a suite of automated tests to verify the behaviour of our exchange. While the majority of those tests are unit or integration tests that run extremely fast, in order for us to have confidence that the whole package fits together in the right way we have a lot of end-to-end acceptance tests as well.

These tests deliver a huge amount of confidence and are thus highly valuable to us, but they come at a significant cost because end-to-end tests are relatively time consuming to run. To minimise the feedback cycle we want to run these tests in parallel as much as possible.

We started out by simply creating separate groupings of tests, each of which would run in a different Jenkins job and thus run in parallel. However, as the set of tests changed over time we kept having to rebalance the groups to ensure we got fast feedback. With jobs finishing at different times they would also tend to pick different revisions to run against, so there was often no single revision that all the tests had run against, reducing confidence in the specific build we picked to release to production each iteration.

To solve this we’ve created custom software to run our acceptance tests which we call Romero. Romero has three parts:

At the start of a test run, the revision to test is deployed to all servers, then Romero loads all the tests for that revision and begins allocating one test to each agent, assigning each agent a server to run against. When an agent finishes running a test it reports the results back to the server and is allocated another test to run. Romero also records how long each test takes to run and uses that history to ensure that the longest running tests are allocated first, preventing them from “overhanging” at the end of the run while all the other agents sit around idle.
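
The allocation itself is conceptually very simple; a sketch (hypothetical shape and names, not Romero’s actual code) might look like:

// Sort by historical duration, longest first, and hand out the next test to
// whichever agent asks for more work.
function createTestAllocator(tests, historicalDurationMillis) {
    var queue = tests.slice().sort(function(a, b) {
        return (historicalDurationMillis[b] || 0) - (historicalDurationMillis[a] || 0);
    });
    return {
        nextTest: function() {
            return queue.shift(); // undefined once everything has been allocated
        }
    };
}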

To make things run even faster, most of our acceptance test suites can be run in parallel, running multiple test cases at once and sharing a single server between multiple Romero agents. Some tests, however, exercise functions which affect the global state of the server and can’t share servers. Romero is able to identify these types of tests and use that information when allocating agents to servers. Servers are designated as either parallel, supporting multiple agents, or sequential, supporting only a single agent. At the start of the run Romero calculates the optimal way to allocate servers between the two groups, again using historical information about how long each test takes.

All together this gives us an acceptance test environment which is self-balancing – if we add a lot of parallel tests one iteration servers are automatically moved from sequential to parallel to minimise the run time required.

Romero also has one further trick up its sleeve to reduce feedback time – it reports failures as they happen instead of waiting until the end of the run. Often a problematic commit can be reverted before the end of the run, which is a huge reduction in feedback time – normally the next run would already have started with the bad commit still in place before anyone noticed the problem, effectively doubling the time required to fix it.

The final advantage of Romero is that it seamlessly handles agents dying, even in the middle of running a test and reallocates that test to another agent, giving us better resiliency and keeping the feedback cycle going even in the case of minor problems in CI. Unfortunately we haven’t yet extended this resiliency to the test servers but it’s something we would like to do.

March 30, 2014 02:48 AM

March 25, 2014

Adrian SuttonTesting@LMAX – Test Results Database

One of the things we tend to take for granted a bit at LMAX is that we store the results of our acceptance test runs in a database to make them easy to analyse later. We not only store whether each test passed for each revision, but also the failure message if it failed, when it ran, how long it took, which server it ran on and a bunch of other information.

Having this info somewhere that’s easy to query lets us perform some fairly advanced analysis on our test data. For example, we can find tests that fail when run after 5pm New York time (an important cutoff in the world of finance) or around midnight (in various timezones). It has also allowed us to identify subtly faulty hardware based on an increased rate of test failures.
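
As an illustration only (the post doesn’t describe the schema, so the table and column names below are made up, as is the generic db.query helper), the faulty-hardware check is essentially a one-liner once the results are in a database:

// Failure rate per server, worst first – a sudden outlier suggests bad hardware.
db.query(
    "SELECT server, " +
    "       SUM(CASE WHEN passed THEN 0 ELSE 1 END) * 1.0 / COUNT(*) AS failure_rate " +
    "FROM test_results " +
    "GROUP BY server " +
    "ORDER BY failure_rate DESC",
    function(err, rows) {
        if (!err) console.log(rows);
    }
);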

In our case the custom software that distributes our acceptance tests across the available hardware records the results directly to the database; however, we have also parsed JUnit XML reports and imported them into the database that way.

However you get the data, having a historical record of test results in a form that’s easy to query is a surprisingly powerful tool and worth the relatively small investment to set up.

March 25, 2014 10:54 AM

Blue HackersRude vs. Mean vs. Bullying: Defining the Differences

March 25, 2014 01:16 AM

March 23, 2014

Adrian SuttonRevert First, Ask Questions Later

The key to making continuous integration work well is to ensure that the build stays green, so the team always knows that if something doesn’t work it was their change that broke it. However, in any environment with a build pipeline beyond a simple commit build – for example integration, acceptance or deployment tests – sometimes things will break.

When that happens, there is always a natural tendency to commit an additional change that will fix it. That’s almost always a mistake.

The correct approach is to revert first and ask questions later. No matter how small the fix might be, there’s a risk that it will fail or introduce some other problem and extend the period that the tests are broken. However, since we know the last build worked correctly, reverting the change is guaranteed to return things to working order. The developers working on the problematic change can then take their time to develop and test the fix, then recommit everything together.

Reverting a commit isn’t a slight against its developers and doesn’t even suggest the changes being made are bad, merely that some detail hasn’t yet been completed and so it’s not yet ready to ship. Having a culture where that’s understood and accepted is an important step in delivering quality software.

March 23, 2014 09:51 AM

March 22, 2014

Anthony TownsBeanBag — Easy access to REST APIs in Python

I’ve been doing a bit of playing around with REST APIs lately, both at work and for my own amusement. One of the things that was frustrating me a bit was that actually accessing the APIs was pretty baroque — you’d have to construct urls manually with string operations, manually encode any URL parameters or POST data, then pass that to a requests call with params to specify auth and SSL validation options and possibly cookies, and then parse whatever response you get to work out if there’s an error and to get at any data. Not a great look, especially compared to XML-RPC support in python, which is what REST APIs are meant to obsolete. Compare, eg:

server = xmlrpclib.Server("http://foo/XML-RPC")
print server.some.function(1,2,3,{"foo": "bar"})

with:

base_url = "https://api.github.com"
resp = requests.get(base_url + "/repos/django/django")
if resp.ok:
    res = resp.json()
else:
    raise Exception(resp.json())

That’s not to say the python way is bad or anything — it’s certainly easier than trying to do it in shell, or with urllib2 or whatever. But I like using python because it makes the difference between pseudocode and real code small, and in this case, the xmlrpc approach is much closer to the pseudocode I’d write than the requests code.

So I had a look around to see if there were any nice libraries to make REST API access easy from the client side. Ended up getting kind of distracted by reading through various arguments that the sorts of things generally called REST APIs aren’t actually “REST” at all according to the original definition of the term, which was to describe the architecture of the web as a whole. One article that gives a reasonably brief overview is this take on REST maturity levels. Otherwise doing a search for the ridiculous acronym “HATEOAS” probably works. I did some stream-of-consciousness posts on Google-Plus as well, see here, here and here.

The end result was I wrote something myself, which I called beanbag. I even got to do it mostly on work time and release it under the GPL. I think it’s pretty cool:

github = beanbag.BeanBag("https://api.github.com")
x = github.repos.django.django()
print x["name"]

As per the README in the source, you can throw in a session object to do various sorts of authentication, including Kerberos and OAuth 1.0a. I’ve tried it with github, twitter, and xero’s public APIs with decent success. It also seems to work with Magento and some of Red Hat’s internal tools without any hassle.

March 22, 2014 08:47 AM

March 20, 2014

Blue HackersOn inclusiveness – diversity

Ashe Dryden writes about Dissent Unheard Of – noting “Perhaps the scariest part of speaking out is seeing the subtle insinuation of consequence and veiled threats by those you speak against.”

From my reading of what goes on, much of it is not even very subtle, or veiled. Death and rape threats. Just astonishingly foul. This is not how human beings should treat each other, ever. I have the greatest respect for Ashe, and her courage in speaking out and not being deterred. Rock on Ashe!

The reason I write about it here on BlueHackers.org is that I think there is a fair overlap between issues of harassment and other nasties and depression, and it will affect individuals, companies, conferences and other organisations alike. So it’s important to call it out and actually deal with it, not regard it as someone else’s problem.

Our social and workplace environments need to be inclusive for all – it’s as simple as that. Inclusiveness is key to achieving diversity, and diverse environments are the most creative and innovative. If a group is not diverse, you’re missing out in many ways, personally as well as commercially.

Please read Ashe’s thoughtful analysis of what causes people and places to often not be inclusive – such understanding is a good step towards effecting change. Is there something you can do personally to improve matters in an organisation? Please tell us about your thoughts, actions and experiences. Let’s talk about this!

March 20, 2014 05:37 AM

March 18, 2014

Adrian SuttonThe truth is out: money is just an IOU

The truth is out: money is just an IOU, and the banks are rolling in it:
What the Bank of England admitted this week is that none of this is really true. To quote from its own initial summary: “Rather than banks receiving deposits when households save and then lending them out, bank lending creates deposits” … “In normal times, the central bank does not fix the amount of money in circulation, nor is central bank money ‘multiplied up’ into more loans and deposits.” In other words, everything we know is not just wrong – it’s backwards. When banks make loans, they create money.
The complexity and subtlety in our financial system is delightful and frightening all at the same time. Almost everything has multiple contributing factors that are impossible to isolate and subtle shifts in world view like this can have huge implications on decision making.

March 18, 2014 11:50 PM

March 02, 2014

Ben MartinThe vs1063 untethered

Late last year I started playing with the vs1063 DSP chip, making a little audio player. There is quite a lot of scope for tinkering with input and output on such a project, so it can be an interesting way to play around with various electronics stuff without it feeling like 10 das blinkenlights tutorials in a row.

Over this time I tried using a 5-way joystick for input as well as some more unconventional mechanisms. The Blackberry Trackballer from SparkFun is my favourite primary input device for the player. Most of the right-hand side board is there to make using that trackballer simpler, boiling it down to a nice I2C device without the need for tight timing or 4 interrupt lines on the Arduino.




The battery in the middle area of the picture is the single 18650 protected battery cell that runs the whole show. The battery leads via a switch to a 3.7v -> 5v step-up to provide a clean power input. Midway down the right middle board is a low-dropout 3v3 regulator from which the bulk of the board is running. The SFE OLED character display wants its 5v, and the vs1063 breakout board, for whatever reason, wants to regulate its own 3v3 from a 5v input. Those are the two 5v holdouts on the board.

Last blog post I was still using a Uno as the primary microcontroller. I have since got rid of that Uno and moved to an on-board 328 running at 8MHz and 3v3. Another interesting learning experience was finding when something 'felt' like it needed a cap. At times the humble cap combo is a great way to get things going again after changing a little part of the board. After the last clean-up it should all fit onto 3 boards again, so it might then also fit back into the red box. Feels a bit sad not to have broken the red box threshold anymore :|

Loosely the board comes out somewhat like the below... there are things missing from the fritzing layout, but it's a good general impression. The trackballer only needs power, gnd, I2C, and MR pin connections. With reasonable use of SMD components the whole thing should shrivel down to the size of the OLED PCB.

Without tuning the code to sleep the 328 or turn off the attiny84 + OLED screen (the OLED is only set to all black), it uses about 65mA while playing. I'm running the attiny84 and OLED from an output pin on the 4050 level shifter, so I can drop power to them completely if I want. I can expect above 40 hours of continuous play without doing any power optimization. So it's all going to improve from there ;)
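
As a rough sanity check on that figure (the cell capacity below is my assumption – the post doesn't state it):

var cellCapacityMah = 2600;  // a typical protected 18650 cell
var drawMa = 65;             // measured draw while playing
console.log((cellCapacityMah / drawMa).toFixed(0) + " hours"); // ~40 hours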

March 02, 2014 01:33 PM

February 18, 2014

Daniel DevineSnorkelling at Perth

I was in Perth for linux.conf.au 2014 and for a place that I've heard sucks, I'm mostly seeing the opposite.

/blog_images/perth_1b.jpg

The view from near Mudurup Rocks.

Like Dirk Hohndel's talk, this post is not so much about technology as it is about the sea. I meant to write this post before I had even left Perth but I ended up getting sidetracked by LCA and general horseplay.

After Kate Chapman's enjoyable keynote I decided to blow that popsicle stand and snorkel at Cottesloe Reef (pdf) which is a FHPA (Fish Habitat Protection Area) just 30 minutes from the conference venue by bus.

Read more…

February 18, 2014 08:30 AM

February 10, 2014

Adrian SuttonInterruptions, Coding and Being a Team

Esoteric Curio – 折伏 – as it relates to coding
Our job as a team is to reinforce each other and make each other more productive. While I attempt to conquer some of the engineering tasks as well as legal tasks and human resource tasks I face, I need concentration and it would certainly help to not be interrupted. I used to think I needed those times of required solitude pretty much all the time. It turns out that I have added far more productivity by enabling my teams than I have personally lost due to interruptions, even when it was inconvenient and frustrating. So much so that I've learned to cherish those interruptions in the hope that, on serendipitous occasion, they turn into a swift, spiritual kick to the head. After all, sometimes all you need for a fresh perspective is to turn out the light in the next room; and, of course, to not have avoided the circumstance that brought you to do it.
Too often we confuse personal productivity, or even just the impression of it, with team productivity. So often, though, having individuals slow down is the key to making the team as a whole speed up. See also Go Faster By Not Working and The Myth of Measurable Productivity.

February 10, 2014 03:21 AM


Last updated: September 30, 2014 03:45 PM. Contact Humbug Admin with problems.