

HUMBUGers


November 23, 2014

Ben Martin: Terry: Updated Top Shelf

The Kinect is now connected much closer to the tilt axis, giving a much better torque-to-hold ratio from the servo gearbox. I used some self-tapping screws to attach the channel to the bottom of the Kinect. Probably not the cleanest solution, but it appears to mount solidly, and then you get to bolt that channel to the rest of the assembly. Looking closer, the Logitech 1080p webcam is mounted offset from the Kinect. This should make for an enjoyable time using the 1080p RGB data and combining it with the VGA depth mask from the Kinect into a point cloud.


The camera pan/tilt is now at the front of the top shelf and a robot arm is mounted at the back of the shelf. The temptation is high to move the arm onto a platform that is mounted to the back of Terry using threaded rod. All sorts of fun and games to be had with automated "pick up and move" tasks! Also handy for some folks who no longer enjoy having to pick items up from the ground. The camera pan/tilt can rotate around to see first-hand what the arm is doing, so to speak.


The wheel assembly is one area that I'm fairly happy with. The Yumo rotary encoder runs at 1024 P/R and is attached using an 8:1 down ratio to give an effective "ideal world" 13-bit precision. Yes, there are Hall effect ICs that give better precision, though they don't look as cool ;) The shaft of the motor is the axle for the wheel. It is handy that the shaft is not right in the centre of the motor, because you can rotate the motor to move the wheel through an arc and thus adjust the large alloy gear until it nicely mates with the brass gear on the rotary encoder.
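As a quick sanity check on those numbers, a few lines of JavaScript show where the roughly 13 bits comes from. The 1024 P/R and 8:1 figures are the ones above; whether quadrature decoding multiplies the count further isn't stated, so this uses the raw pulse rate:

// Effective encoder resolution at the wheel: 1024 pulses per encoder
// revolution, geared 8:1 so the encoder turns 8 times per wheel turn.
var pulsesPerEncoderRev = 1024;
var gearRatio = 8;

var countsPerWheelRev = pulsesPerEncoderRev * gearRatio;   // 8192
var bits = Math.log2(countsPerWheelRev);                   // 13
var degreesPerCount = 360 / countsPerWheelRev;             // ~0.044 degrees

console.log(countsPerWheelRev, bits, degreesPerCount.toFixed(3));
// 8192 13 0.044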



Lower down near the wheels is a second distance sensor, which is good for up to around 80cm. The scan rate is much slower than the Kinect's, however.


Things are getting very interesting now. A BeagleBone Black, many Atmel 328s on board, and an Intel j1900 motherboard to run the SLAM software.

November 23, 2014 08:25 AM

October 31, 2014

Ben Martin: Terry 2.0 includes ROS!

What started as a little tinker around the edges has resulted in many parts of Terry being updated. The Intel j1900 motherboard is now mounted in the middle of the largest square structure, an SSD is mounted (the OCZ black drive at the bottom), yet another battery is mounted (a large external laptop supply), and the Kinect is now mounted on the pan and tilt mechanism along with the 1080p webcam that was previously there. The BeagleBone Black has moved to its own piece of channel and a breadboard is sunk into the main 2nd top level channel.


I haven't cabled up the j1900 yet. On the SSD is Ubuntu and ROS, including a working DSLAM (strangely, some fun and games getting that to compile and then to not segv right away).

I used 3 Actobotics beams: one IIRC is a 7.7 incher and the other two are shorter. The long beam actually runs along the right side of the motherboard that you see in the image. The beam is attached with nylon bolts and has a 6.6mm standoff between the motherboard and the beam to avoid any undesired electrical shorts. With the two shorter beams on the left side of the motherboard, it is rather securely attached to Terry now. The little channel you see on the right side, up a little from the bottom, is there for the 7.7 inch beam to attach to (behind the motherboard), and there is a shorter beam on this side to secure the floating end of the channel to the base channel.



The alloy structure at the top of the pan and tilt now has a Kinect attached. I used a plastic wall mount adaptor whose nut traps, with great luck and convenience, lined up with the Actobotics holes. I have offset the channel like you see so that the centre of gravity is closer to directly above the pan and tilt. Perhaps I will have to add some springs to help the tilt servo when it moves the Kinect back too far from the centre point. I am also considering a counterbalance weight below the tilt, which would also work to try to stabilize the Kinect at the position shown.



I was originally planning to put a gripper on the front of Terry. But now I'm thinking about using the relatively clean back channel to attach a threaded rod and stepper motor so that the gripper can have access to the ground and also the table top. Obviously the cameras would have to rotate 180 degrees to be able to see what the gripper was up to. Also, for floor pickups the tilt might have to handle a reasonable downward "look" without being too hard on the servo.

There were also some other tweaks. A 6 volt regulator is now inlined into a servo extension cable and the regulator is itself bolted to some of the channel. Nice cooling, and it means that the other end of that servo extension can take something like 7-15v and it will give the servo the 6v it wants. That is actually using the same battery pack as the drive wheels (8xAA).

One thing that might be handy for others who find this post: the BeagleBone Black case from SparkFun attaches to Actobotics channel fairly easily. I used two cheesehead M3 nylocks and had to force them into the enclosure. The nylocks lined up with the Actobotics channel and so the attachment was very simple. You'll want a "3 big hole" or longer bit of channel to attach the enclosure to. I attached it to a 3 big hole piece and then attached that channel to the top of Terry with a few threaded standoffs. That simplifies attaching and removing it should that ever be desired.

I know I need slip rings for the two USB cameras up top. And for the tilt servo as well. I can't use a USB hub up top because both the USB devices can fairly well saturate a USB 2.0 bus. I use the hardware encoded mjpeg from the webcam which helps bandwidth, but I'm going to give an entire USB 2.0 bus to the Kinect.

October 31, 2014 08:27 AM

October 21, 2014

Adrian Sutton: So you want to write a bash script…

Before writing any even half serious bash script, stop and read:

Any other particularly good articles on writing reliable bash scripts that should be added to this list?

October 21, 2014 02:43 PM

October 15, 2014

Ben Martin: Sliding around... spinning around.

The wiring and electronics for the new omniwheel robot are coming together nicely. Having wired this up using 4 individual stepper controllers, one sees the value in commissioning a custom base board for the stepper drivers to plug into. I still have to connect an IMU to the beast, so precision strafing will (hopefully) be obtainable. The SparkFun mecanum video shows the more traditional two-wheels-each-side design, but it does wobble a bit when strafing.


Apart from the current requirements the new robot is also really heavy, probably heavier than Terry. I'm still working out what battery to use to meet the high current needs of four reasonable steppers on the move.

October 15, 2014 01:23 PM

October 10, 2014

Blue Hackers: BlueHackers @ Open Source Developers’ Conference 2014

This year, OSDC’s first afternoon plenary will be a specifically BlueHackers-related topic: Stress and Anxiety, presented by Neville Starick – an experienced Brisbane-based counsellor.

We’ll also have our traditional BlueHackers “BoF” (birds-of-a-feather) session in the evening, usually featuring some general information, as well as the opportunity for safe lightning talks. Some people talk, some people don’t. That’s all fine.

The Open Source Developers’ Conference 2014 is being held at the beautiful Griffith University Gold Coast Campus, 4-7 November. It features a fine program, and if you use this magic link you get a special ticket price, but the regular registration is only around $300 anyhow, $180 for students! This includes all lunches and the conference dinner. Fabulous value.

October 10, 2014 05:47 AM

October 08, 2014

Ben Fowler: Bloody LILO!

I replaced a disk in my server today. It has a fairly complicated disk setup, and I had to replace the first disk in the array, which also has a (non-RAID) boot partition. The disk replacement and array rebuild went fine. But I hit a snag, when trying to figure out how to successfully re-run LILO. Would somebody mind explaining to me how "Fatal: Incompatible Raid version information on /dev/md0

October 08, 2014 12:57 AM

October 04, 2014

Adrian Sutton: Don’t Make Your Design Responsive

Every web based product is adding “responsive design” to their feature lists. Unfortunately in many cases that responsive design is actually making their product much harder to use on a variety of screen sizes instead of easier.

The problem is that common CSS libraries and grid systems, including the extremely popular bootstrap, imply that a design can be made responsive using fixed cut-off points and the design just automatically adjusts. In reality making a design responsive requires tailoring the cut off points so that the design adjusts at the points where it stops working well.

For example, let’s look at the bootstrap documentation and in particular the top menu it uses. The bootstrap documentation gets it right: we can shrink the window down right to the point where the menus only just fit, and they stick to the full size design:

Menu remains full-size for as long as it can fit.


If we shrink the window further the menus wouldn’t fit anymore so they correctly switch to the hamburger style:

Menu collapses once it would no longer fit.


That’s the right way to do it. The cutoff points have been specifically tailored for the content. There are other stylistic changes as well as these structural ones – the designer believes that centred text works better for headings on smaller screens for example. That’s fine, they’re fairly arbitrary design decisions based on what the designer believes looks best. I’m focussed on structural issues.

To see what happens when this is ignored, let’s pretend that we add a new menu item:

Our "New Item" shown correctly when the window is wide enough.


But now when we shrink the window down, the breakpoint is in the wrong place:

The "New Item" menu no longer fits but causes incorrect wrapping because the break point is wrong.


Now the design breaks as we shrink the window because the break point hasn’t been specifically tailored to the content. This is the type of error that regularly happens when people think that a responsive grid system can automatically make their site responsive. The repercussions in this case aren’t particularly bad, but they can be significantly worse.

Recently Jenkins released a rewrite of their UI, moving it to bootstrap, which has unfortunately gotten responsive design completely wrong (and sadly I’m yet to see anything it has actually improved). Browser widths that used to work perfectly well with the desktop-only site are now treated as mobile browsers and content wraps into a long column. What’s worse, the most useless content, the sidebar, is what’s shown at the top, with the main content pushed way down the page. At other widths the design doesn’t fit but doesn’t wrap either, leaving some links completely inaccessible.

It would be much better if people stopped jumping on the responsive design bandwagon and just designed their site to work for desktop browsers unless they are prepared to fully invest and do responsive design right. Mobile browsers are designed to work well with sites designed for desktop and have lots of tools and techniques for coping with them. As you add responsive design and other adjustments designed for mobile, you take the responsibility for making the design work well everywhere away from the browser and onto yourself. Using predefined breakpoints from a CSS library is unlikely to give the result you intend. It would be nice if CSS libraries stopped claiming that it will.

October 04, 2014 04:17 AM

September 26, 2014

Adrian Sutton: Safely Encoding Any String Into JavaScript Code Using JavaScript

When generating a JavaScript file dynamically it’s not uncommon to have to embed an arbitrary string into the resulting code so it can be operated on. For example:

function createCode(inputValue) {
  return "function getValue() { return '" + inputValue + "'; }";
}

This simplistic version works great for simple strings:

createCode("Hello world!");
// Gives: function getValue() { return 'Hello world!'; }

But breaks as soon as inputValue contains a special character, e.g.

createCode("Hello 'quotes'!");
// Gives: function getValue() { return 'Hello 'quotes'!'; }

You can escape single quotes but it still breaks if the input contains a \ character. The easiest way to fully escape the string is to use JSON.stringify:

function createCode(inputValue) {
  return "function getValue() { return " +
         JSON.stringify(String(inputValue)) +
         "; }";
}

Note that JSON.stringify even adds the quotes for us. This works because a JSON string is a JavaScript string, so if you pass a string to JSON.stringify it will return a perfectly valid JavaScript string complete with quotes that is guaranteed to evaluate back to the original string.

The one catch is that JSON.stringify will happily stringify JavaScript objects and numbers, not just strings, so we need to force the value to be a string first – hence, String(inputValue).
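For completeness, here is the JSON.stringify version run against the awkward inputs; the outputs in the comments are simply what JSON.stringify produces:

createCode("Hello 'quotes'!");
// Gives: function getValue() { return "Hello 'quotes'!"; }

createCode('Back\\slash and "double quotes"');
// Gives: function getValue() { return "Back\\slash and \"double quotes\""; }

createCode(42); // non-string input is coerced first
// Gives: function getValue() { return "42"; }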

 

September 26, 2014 01:21 AM

September 22, 2014

Adrian Sutton: Software is sometimes done

In Software is sometimes done Rian van der Merwe makes the argument that we need more software that is “done”:

I do wonder what would happen if we felt the weight of responsibility a little more when we’re designing software. What if we go into a project as if the design we come up with might not only be done at some point, but might be around for 100 years or more? Would we make it fit into the web environment better, give it a timeless aesthetic, and spend more time considering the consequences of our design decisions?

It’s an interesting question – if we had to get things right the first time, would we do a better job? If our design decisions were set in stone for all time would things be better?

The problem is, we’ve already asked this question and decided that in fact designing things up front and setting it in stone doesn’t work as well as releasing early and often with short feedback cycles so that we can adjust as we go. It’s waterfall vs agile and it turns out agile wins.

That said, there’s a difference between rushing a sloppy job out the door and doing things well with an iterative cycle to adjust to learning. A short feedback loop is there to let you learn and improve, not to let you release any old thing and get away with it. We need more software developed by doing the best job possible with the information available, combined with a short feedback cycle to gather more information and continually raise the stakes for what’s possible.

It can be romantic to look back and think that we used to do a better job because things were more permanent, that software used to be done, but it’s just not true:

When Windows 95 came out, it was done. Yes, there were some patches to it, but they were few and far between, and in general quite difficult to come by. But of course, then the Internet and App Stores happened in full force, and suddenly we decided that “Software is never done”. In some sense this is certainly true. There are always bugs to fix, things to improve, more features to add, unused features to remove — and of course, the SaaS model makes it all so easy. But I wonder if we’ve taken this a bit too far.

Windows 95 may have been done, but Windows was not. Otherwise we’d still be running Windows 95. We’re kidding ourselves if we think that anyone at Microsoft ever thought that Windows 95 would be the last thing they ever released, that the OS would never change in the future.

Even if we consider Windows 95 as a standalone thing that is “done”, would you run it today? Of course not; by modern standards it’s horrible. The same is true of every other piece of unmaintained software I can think of: it may have been good enough or even the best for a long time after it became unmaintained, but eventually it falls behind. Eventually it stops being “done” in the sense that it doesn’t need any further work and becomes “done” in the sense that no one uses it anymore.

 

September 22, 2014 12:26 AM

September 21, 2014

Adrian Sutton: CI Isn’t a To-do List

When you have automated testing set up with a big display board to provide clear feedback, it’s tempting to try and use it for things it wasn’t intended for. One example of that is as a kind of reminder system – you identify a problem somewhere that can’t be addressed immediately but needs to be handled at some point in the future. It’s tempting to add a test that begins failing after a certain date (or the inverse, whitelisting errors until a certain date). Now the build is green and there’s no risk of you forgetting to address the problem because it will fail again in the future. Perfect, right?
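To make the temptation concrete, the kind of date-triggered test being described looks something like this – a sketch in plain JavaScript using Node’s assert and a mocha-style it(), not code from any real build:

var assert = require('assert');

// Anti-pattern: a reminder test that stays green until an arbitrary date.
it('legacy export endpoint is removed before December', function () {
  var deadline = new Date('2014-12-01');
  assert(Date.now() < deadline.getTime(),
         'Time is up: remove the legacy export endpoint');
});

Everything that follows stems from that Date.now() call: the result depends on when the build runs, not on what the code does.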

The problem is that it sacrifices the ability to easily isolate the cause of a failure. You can no longer revert changes to get back to a working baseline because the current time is a factor in whether your tests pass or not. You can no longer run historical builds through CI which you may need to do as part of supporting clients on older versions.

It also inevitably leads to false failures, as the time points are almost always arbitrary and estimated timelines for completing tasks often slip. So the board goes red and the response is to just suppress it again.

Continuous integration isn’t there to remind you to do something – it’s there to tell you if your build is ok to ship to production or not. It doesn’t make sense to say that a build is ok to ship to production now but not ok to ship in a week’s time. If production is going to blow up in the future because of a bug in the code, you want the board to be red now, not when production actually blows up. And if the client is prepared to accept the risk and leave the problem unfixed then it’s not yet a requirement and doesn’t need tests asserting it – just put a card in the backlog for the client to prioritise and play when required.

If you need to remember to do something, either put a task in the backlog or on the current story board to do it or put a reminder in your calendar. Keep CI focussed on telling you about the state of your code right now – if it’s broken it should be red, if it’s not broken it should be green. Not broken until next week should never be an option.

 

September 21, 2014 11:59 PM

September 16, 2014

Blue Hackers: Quitting cigarettes

Why is it that having a mental illness and smoking go hand in hand? I’m looking closer to getting a foot in the door with my dream job doing something I love. I also plan to quit smoking again with my fiancée and two friends. It’s not going to be easy, but I went seven months not smoking. I really don’t enjoy it and it costs too much. That, and every cigarette takes eight minutes off my life. I’ll keep you readers of BlueHackers posted about the job. I’m excited.

September 16, 2014 05:03 PM

September 14, 2014

Blue Hackers: More exercise and happiness

So I managed to last one hour the other day doing boxing-related exercises. After I had cooled down I felt great. Those endorphins are so good for me. I want more, but I’m not good at wanting to exercise.

September 14, 2014 07:55 PM

September 10, 2014

Blue Hackers: Losing weight

Today I have officially begun trying to lose weight. Medications and poor lifestyle choices have put me at very high risk of diabetes. I also have high cholesterol. Not good. I did 30 minutes of intense boxing exercise. Thanks Ben S. He’s my best man for my wedding and a friend for roughly 8 years.

September 10, 2014 08:39 AM

Ben Martin: Getting a feel for Metapolator and Cascading Parameter Values


Metapolator is a new project for working on font families. It allows you to generate many fonts in a family from a few "master" cases. For example, you could have a normal font, modify it to create a rather bold version and then use metapolator to generate 10 shades on the line between normal and bold.

I have recently been reading up on Metapolator and how to hack on it. This post describes my limited understanding of what is a fairly young project, so warnings in line with that apply: errors are my own, the code is the final arbiter, etc.

Much of the documentation of Metapolator involves the word Meta, so I'm going to drop it from this post, as seeing it all the time removes its value in terms of semantic add.

At the core of all of this polating are parameters. For example, after importing a font and calling it "normal" you might assign a value of 100 to xheight. I am assuming that many of the spline points in the glyph (skeleton) can then be defined in terms of this xheight. So the top of the 'n' might be 0.95*xheight.

A system using much the same syntax as Cascading Style Sheets is available to allow parameter values to be set or updated. Because it's parameters, it's called CPS instead of CSS. So you might select a glyph like 'glyph#n' and then set its xheight to be 105 instead. It seems these selectors go right down to the individual point, if that's interesting.

In order to understand the CPS system I decided to start modifying a basic example and trying to get specific values back out of the CPS system. The description of this is mainly to see if my playing around was somewhat along the lines of the intended use of the CPS system.

For this I use a very basic CPS

$ cat /tmp/basic.cps
 
* {
     label : 1234;
     xx    : 5;
}

glyph#y penstroke:i(0) point:i(0) {
     xx    : 6;
}

$ metapolator dev-playground-cps /tmp/basic.cps

The existing dev-playground-cps command makes its own fonts up so all you need is a CPS file that you want to apply to those fonts. In my case I'm using two new properties, the label and 'xx' which are of type string and number respectively.

A default value of 3 is assigned to xx for all points and each point and glyph get a unique label during setup.

I found it insightful to test the below with and without selectors that modify the 'xx' property in the CPS, and at both levels. That is, changing the xx:5 and xx:6 in the above CPS to be xxno1:5 and xxno2:6 and seeing what the below printed out. The xx.value case makes the most sense to me: show me the default value (3) if nothing in any CPS overrides it, or show me what the CPS has set if it did override it for the point.

element = controller.query('master#heidi glyph#y penstroke:i(0) point:i(0)')
console.log('element:', element.particulars);
console.log('element:', element.label);
console.log('element:', element.xx);
computed = controller.getComputedStyle(element)
console.log('label: ' + computed.get('label'));
console.log('xx.base   : ' + computed.getCPSValue('xx'));
console.log('xx.updated: ' + computed.getCPS('xx'));
console.log('xx.value  : ' + (computed.getCPS('xx') ? computed.getCPS('xx') : computed.getCPSValue('xx')));


The above code is also pushed to a branch of my mp fork at cps.js#L213

I found that a little tinkering in StyleDict.js was needed to get things to operate how I'd expected which is most likely because I'm using it wrong^tm.
The main thing was changing getCPSValue to test for a local entry for a parameter before using the global default StyleDict.js#L93.

I might look at adding a way to apply a CPS to a named font and showing the resulting font as pretty json. For reference this will likely have value and valuebase showing the possibly CPS updated value and the value from the original font respectively.

September 10, 2014 05:03 AM

September 09, 2014

Blue Hackers: 7 Things You Shouldn’t Say to Someone With Anxiety

http://www.huffingtonpost.com/2014/02/17/things-not-to-say-to-some_n_4781182.html

If you’ve ever suffered from severe anxiety, you’re probably overly familiar with the control it can have over your life. And you’re not alone — it affects a sizeable percentage of the population.

Learning more about anxiety and stress can be really helpful.

September 09, 2014 11:20 PM

September 07, 2014

Blue Hackers: Anxiety Attack felt like Heart Attack :-(

I had my first anxiety attack the other day. My lady was off picking a wedding dress. I was looking after our son, who was asleep, and it just came on. I got our neighbour, who luckily was home that day in his garage working on his cars. It felt like a heart attack. It stopped me from moving my right shoulder properly. I guess now I know I need help, more than a psychiatrist can help with (as they only prescribe medication, if you did not know already). I’m looking into a psychologist. I pay for private health with an awesome company called ahm. I don’t ever want to return to needing to go into hospital though. This will be a short post, but never discredit anyone who says they suffer anxiety, as it’s a serious thing that causes actual physical pain. It wasn’t until the GP gave me the all clear that I felt better again. Oh, and now I wear glasses as I’m short-sighted from many years of looking at computer screens.

September 07, 2014 01:22 AM

September 02, 2014

Blue Hackers: Follow up

I have a mental illness. From consuming weed for those years. I have major depression & anxiety. I also get paranoid about germs/what people think of me/my health. I think sometimes I make things worse for myself. The best thing that has ever happened is meeting my lovely Becci. She definitely has taken my unwell self and made me well. I had long quit the weed. But recovering from heavy usage takes the brain a while. Years in fact. I have been in and out of work. Fired for having a mental illness (CBA) and more recently, as in last year, my mother doused herself in gasoline and set herself alight. I haven’t walked easy street. But I try to keep my head up and wits about me. I have a family to care for and my grandparents who helped raise me quite a bit. Well, a lot.

September 02, 2014 01:00 PM

Blue Hackers: A bit about “jlg”

I’m 29, male, from sunny Brisbane (sunny at the moment). I was born in Adelaide, SA, Australia in a hospital called Modbury Hospital. It’s still there. I have one son. I also have a daughter who by law I am not legally allowed to see, as I am not on the birth certificate, but I’m 99.99 percent sure I’m her father. Her name for the record is Annabel. I’m unsure of the spelling. Our son (mine and Bec’s) is being raised with so much love and care and I only wish the same for my daughter. I should mention I’m no street thug or criminal. I actually have no criminal record. I survive on $500AUD a fortnight currently, as of right this moment. Which is not much for an overweight male. I don’t really have any vices per se, but I do use computers so frequently that at my age of 29 I have short-sighted vision. I should mention I don’t have diabetes.

My story is a common one I think? Man meets woman (Steve Cullen, my bio dad). Has sex, finds out has baby and does a runner. I have to this day never met my bio dad. I have seen a photo when I was younger. He was some bald dude. I don’t think much of him and I actually don’t speak much of him. My mother was awesome and she still is, albeit after her last suicide attempt. I will get to this later. I should mention I was a heavy smoker of cannabis from 2003 to 2007. I attended a place called HUMBUG. Ironically it was a friend I made called Daniel who got me into marijuana. He would write code, I’d hack computers. We kind of worked as a team. Because the trust was set by us consuming so much (I will call it weed). I’m not proud of my drug usage but little did I know my Mum was a heavy user of other “drugs”. She was also in the army. For roughly 6 years she taught Army service men and women about English/maths etc. As she was a UniSA educated teacher. I on the other hand am self taught. From a young age I was somewhat unwillingly writing phish attacks, but for chat websites. I would call these fake logins via HTML. I did this all roughly during high school. I admit freely that the school network was a joke. That doesn’t mean I abused it. I just made sure I couldn’t use their computers by inputting ASCII character alt+256, the invisible char, into the login screen I was using. It was Novell and it would not log in if you entered this char. Do it quickly without the teacher looking and then you’d get moved, say, to a girl you liked and flirt with her…. :-o)

For the sake of keeping things realistic and true, I was actually very frigid. I dated some real nice girls; I just couldn’t even get the courage to do anything more than sitting near them. That obviously changed in my final few years. I have always been anti-authority because I actually had 0 parent supervision for most of my teens. I would sit in front of my IBM Aptiva listening to god awful rap music I won’t mention online. I would sit reading RFCs, reading how to write HTML, then thinking outside the box and doing what’s now called XSS (aka cross site scripting). Yahoo was one I did. Obviously I have never been the type of guy to go “hey, here’s my handle and here I am, LEA, track me down”. I prefer doing these things without an identity; I always have, always will. I am a freelance individual. Whilst I sympathize with various well known hacktivists, I do not go out of my way to engage them.

I think this is enough for now… I will update soon. It’s 10.37pm, I know not that late, but all this writing has exhausted me. More later.

September 02, 2014 12:37 PM

August 28, 2014

Ben Martin: Terry is getting In-Terry-gence.

I had hoped to use a quad core ARM machine running ROS to spruce up Terry the robot, performing tasks like robotic pick and place, controlling Tiny Tim, and autonomous "docking". Unfortunately I found that trying to use a Kinect from an ARM based Linux machine can make for some interesting times. So I thought I'd dig at the ultra low end Intel chipset "SBC". The below is a J1900 Atom machine which can have up to 8GB of RAM and sports the features that one expects from a contemporary desktop machine: Gb net, USB3, SATA3, and even a PCI-e expansion slot.


A big draw to this is the "DC" version, which takes a normal laptop style power connector instead of the much larger ATX connectors. This makes it much simpler to hook up to a battery pack for mobile use. The board runs nicely from a laptop extension battery, even if the on button is a bit funky looking. On the left is a nice battery pack which is running the whole PC.

An interesting feature of this motherboard is that it has no LEDs at all. I had sort of gotten used to Intel boards having blinking lights, power LEDs and the like.
There should be enough CPU grunt to handle the Kinect and start looking at doing DSLAM and thus autonomous navigation.

August 28, 2014 12:15 PM

August 27, 2014

Daniel Devine: Mailbag: Windows Developer Program for IoT


So yeah, this is not something you'd expect to see here right?

Basically, I just heard that Microsoft was giving away free Intel Galileo boards and I signed up for shits and giggles and did not really expect to be granted one.

Part of the signup process (which I think is still open) is stating what you plan to build with the board. I can't remember exactly what I wrote, but the one thing I do remember mentioning was investigating security related problems and solutions in the Internet of Things (IoT) space. A month or so later and I still find the topic interesting so that's what I'm going to do.

Read more…

August 27, 2014 07:53 AM

August 26, 2014

Blue Hackers: About your breakfast

We know that eating well (good nutritional balance) and at the right times is good for your mental as well as your physical health.

There’s some new research out on breakfast. The article I spotted (Breakfast no longer ‘most important meal of the day’ | SBS) goes a bit popular and funny on it, so I’ll phrase it independently in an attempt to get the real information out.

One of the researchers makes the point that skipping breakfast is not the same as deferring it. So consider the reason: are you going to eat properly a bit later, or are you not eating at all?

When you do have breakfast, note that really most cereals contain an atrocious amount of sugar (and other carbs) that you can’t realistically burn off even with a hard day’s work. And from my own personal observation, there’s often way too much salt in there also. Check out Kellogg’s Cornflakes for a neat example of way-too-much-salt.

Basically, the research comes back to the fact that just eating is not the point; it’s what you eat that actually really does matter.

What do you have for breakfast, and at what point/time in your day?

August 26, 2014 02:00 AM

July 20, 2014

Adrian Sutton: From Java to Swift

Ever since the public beta of OS X I’ve been meaning to get around to learning Objective-C but for one reason or another never found a real reason to. I’ve picked up bits and pieces of it and even written a couple of working utilities but those were pretty much entirely copy/paste from various sources. Essentially they were small enough and short-lived enough that I only needed the barest grasp of Objective-C syntax and no understanding of the core philosophies and idioms that really make a language what it is. This is probably best exemplified by the approach to memory management those utilities took: it won’t run for long, so just let it leak.

I do however have a ton of experience in Java and JavaScript plus knowledge and experience in a bunch of other languages to a wide range of extents. In other words, I’m not a complete moron, I’m just a complete moron with Objective-C.

Anyway, obviously when Swift was announced it was complete justification for my ignoring Objective-C all these years and interesting enough for me to actually get around to learning it.

So I’ve been building a very small little utility in Swift so I can pull information out of OS X’s system calendar from the command line and push it around to various places that I happen to want it and can’t otherwise get it. The code is up on GitHub if you’re interested – code reviews and patches most welcome. It’s been a great little project to get used to Swift the language without spending too much time trying to learn all the OS X APIs.

Language Features

Swift has some really nice language features that make dealing with common scenarios simple and clear. Unlike many languages it doesn’t seem to go too far with that though – it doesn’t seem likely that people will abuse its features and create overly complex or overly succinct code.

My favourite feature is the built-in optional support. A variable of type String is guaranteed to not be null; a variable of type String? might be. You can’t call any methods from String on a String? variable directly – you have to unwrap it first, confirming it isn’t null. That would be painful if it weren’t for the ‘if let’ construct:

let events: String? = ""
if let events = events {
    events.utf16count()
}

I’ve dropped into a habit here which might be a bit overly clever – naming the unwrapped variable the same as the wrapped one. The main reason for this is that I can never think of a better name. I figure it’s much like having an if events != nil check.

APIs

Calling Swift a new language is correct but it would almost be more accurate to call it a new syntax instead. Swift does have its own core API which is unique to it, but that’s very limited. For the most part you’re actually dealing with the OS X (or iOS) APIs which are shared with Objective-C. Thus, people with experience developing in Objective-C quite obviously have a huge head start with Swift.

The other impact of sharing so many APIs with Objective-C is that some of the benefits of Swift get lost – especially around strict type checking and null reference safety. For example retrieving a list of calendar events is done via the EventKit method:

func eventsMatchingPredicate(predicate: NSPredicate!) -> [AnyObject]!

which is helpfully displayed inside Xcode using Swift syntax despite it being a pre-existing Objective-C API and almost certainly still implemented in Objective-C. However if you look at the return type you see the downside of inheriting the Objective-C APIs: the method documentation says it returns [EKEvent] but the actual declaration is [AnyObject]!  So we’ve lost both type safety and null reference safety because Objective-C doesn’t have non-nullable references or generic arrays. It’s not a massive loss because those APIs are well tested and quite stable so we’re extremely unlikely to be surprised by a null reference or unexpected type in the array, but it does require some casting in our code and requires humans to read documentation and check if something can be null rather than having the compiler do it for us.

If Swift were intended to be a language that competes with Java, Python or Ruby, the legacy of the Objective-C APIs would be a real problem. However, Swift is designed specifically to work with those APIs, to be a relatively small but powerful step for OS X and iOS developers. In that context the legacy APIs are really just a small bump in the road that will smooth out over time.

Xcode

The other really big thing a Java developer notices switching to Swift is what a Java developer notices when switching to any other language – the tools suck. In particular, the Java IDEs are superb these days and make writing, navigating and refactoring code so much easier. Xcode doesn’t even come close. The available refactorings are quite primitive even for C and Objective-C and they aren’t supported at all for Swift.

The various project settings and preferences in Xcode are also a complete mystery – and judging from the various questions and explanations on the internet, it doesn’t seem to be all that much clearer even to people with lots of experience. In reality I doubt it’s really much different to Java, which also has a ridiculous amount of complexity in IDE settings. The big difference is that in the Java world you (hopefully) start out by learning the basics using the standard command line tools directly. Doing so gives you a good understanding of the build and runtime setup and makes it much clearer what is controlling how your software is built and what is just setting up IDE preferences. Xcode does provide a full suite of command line developer tools so hopefully I can learn more about them and get that basic understanding.

Finally, Xcode 6 beta 3 is horribly buggy. It’s a beta so I can forgive that but I’m surprised at just how bad it is even for a beta.

Cocoa Pods

This was a delight to stumble across.  Adding a dependency to a project was a daunting prospect in Xcode (jar files are surprisingly brilliant). I really don’t know what it did but it worked and I’m grateful. Libraries that aren’t available as pods are pretty much dead to me now. There does seem to be a pretty impressive array of libraries available for a wide range of tasks. Currently all of them are Objective-C libraries so you have to be able to understand Objective-C headers and examples and convert them to Swift but it’s not terribly difficult (and trivial for anyone with an Objective-C background).

Overall

Swift has a good feel about it – lots of neat features that keep code succinct. Also it’s very hard not to like strict type checking with good type inference. With Apple pushing Swift as a replacement for Objective-C over time the libraries and APIs will become more and more “Swift-like”. Xcode should improve to at least offer the basic refactorings it has for other languages and stabilise which will make it a workable IDE – exceeding the capabilities of what’s available for a lot of languages.

 

Most importantly, the vast majority of existing Objective-C developers seem to quite like it – plenty of issues raised as well, but overall generally positive.

I think the future for Swift looks bright.

July 20, 2014 11:47 AM

July 18, 2014

Ben Fowler: How to rick-roll your friends with video brochures

My employers recently distributed a promotional video to everybody in the company in the form of video brochures, as part of their big rebranding effort. My puerile sense of humour ensured that I had to have some fun with this cheap, but quite neat piece of hardware. "Video brochures" belong to a class of devices known as s1 mp3 video players, which includes most cheap Chinese MP3 audio and video

July 18, 2014 07:13 PM

July 17, 2014

Blue Hackers: Adverse Childhood Experience (ACE) questionnaire | acestoohigh.com

NOTE: the links referred to in this post may contain triggers. Make sure you have appropriate support available.

http://acestoohigh.com/got-your-ace-score/

There are 10 types of childhood trauma measured in the ACE Study, personal as well as ones related to other family members. Once you have your score, there are many useful insights later in the article.

The origin of this study was actually in an obesity clinic.

July 17, 2014 08:45 AM

Blue Hackers: Harvard 75 year longitudinal study released re men and happy lives

July 17, 2014 01:28 AM

July 15, 2014

Daniel Devine: Mildly Useful jQuery Plugins

Over the last few years I have written some mildly useful jQuery plugins. I've had them published on BitBucket but I've never really announced them. Here they are.

Read more…

July 15, 2014 06:36 AM

Ben Martin: Hookup wires can connect themselves...

A test of precision movement of the Lynxmotion AL5D robot arm, seeing if it could pluck a hookup wire from a whiteboard and insert it into an Arduino Uno. The result: yes, it certainly can! To be able to go from a Fritzing layout file to an automatic real-world jumper setup, wires would have to be inserted in a specific order so that the gripper could overhang the unwired part of the header as it went along.

Video: Lynxmotion AL5D moving a jumper to an Arduino (Ben Martin on Vimeo): http://player.vimeo.com/video/100631767

July 15, 2014 04:22 AM

July 08, 2014

Blue Hackers: 7 Things to Remember When You Think You’re Not Good Enough

July 08, 2014 03:01 AM

July 06, 2014

Ben Martin: Ending up Small

It's easy to get swept up in trying to build a robot that has autonomy and the proximity sensing, environment detection and inference, and feedback mechanisms that go along with that. There's something to be said for the fun of a direct drive robot with just power control and no feedback. So Tiny Tim was born! For reference, his wheels are 4 inches in diameter.


A Uno is used with an analog joystick shield to drive Tim. He is not intended to, and probably never will be able to, move autonomously. Direct command only. Packets are sent over a wireless link from the controller (5v) to Tim (3v3). Onboard Tim is an 8MHz/3v3 Pro Micro which I got back on Arduino day :) The motors are driven by a 1A Dual TB6612FNG motor driver which is operated very much like an L298 dual h-bridge (2 direction pins and a PWM). Tim's Pro Micro also talks to an OLED screen, and his wireless board is put out behind the battery to try to isolate it a little from the metal. The OLED screen needed 3v3 signals, so Tim became a 3v3 logic robot.

He is missing a hub mount, so the wheel on the left is just sitting on the mini gearmotor's shaft. At the other end of the channel is a Tamiya omni wheel which tilts the body slightly forward. I've put the battery at the back to try to make sure he doesn't flip over during hard braking.

A custom PCB would remove most of the wires on Tim and be more robust. But most of Tim is currently put together from random bits that were available. The channel and beams should have a warning letting you know what can happen once you bolt a few of them together ;)

July 06, 2014 04:27 AM

July 05, 2014

Daniel Devine: Encrypted Backup to Amazon S3 (2014)

The following is a guide on how to do encrypted backups to Amazon S3. This is an updated version of my older system which used the Dt-S3-Backup script but I've decided to try a replacement which is available from the EPEL repository - duply. This guide has been updated for and tested on CentOS 6.

Read more…

July 05, 2014 05:30 AM

July 01, 2014

Ben Martin: The long road to an Autonomous Vehicle.

Some people play tennis; some try to build autonomous wheeled robots. I do the latter, and the result has come to be known as “Terry”. On the long road toward that goal, what started as direct drive, “give the motor a PWM of X% of the total power”, led to having encoder feedback so that the wheel could be turned an exact amount over a given time. This has meant that the control interface no longer uses a direct power and direction slider but has buttons to perform specific motion tasks. The button with the road icon and a 1 will move the wheels forward a single rotation to advance Terry 6π inches forwards (6 inch wheels).
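The 6π figure is just the wheel circumference, and the distance per encoder count falls out the same way. The 1024 P/R and 8:1 encoder figures below are the ones from the newer posts above; the rest is arithmetic:

// One full rotation of Terry's 6 inch wheels:
var wheelDiameterInches = 6;
var inchesPerRotation = Math.PI * wheelDiameterInches;       // ~18.85 inches, i.e. 6 * pi

// Using the 1024 P/R, 8:1 encoder figures from the newer posts:
var countsPerRotation = 1024 * 8;
var inchesPerCount = inchesPerRotation / countsPerRotation;  // ~0.0023 inches

console.log(inchesPerRotation.toFixed(2), inchesPerCount.toFixed(4));
// 18.85 0.0023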

The red tinted buttons use the wheel encoders to provide precision movement. Enc left and Enc right turn Terry in place (one wheel forward, one wheel backwards). The Circus and Oval buttons perform patrol-like maneuvers. Circus moves forward, turns in place, returns back, and turns around again; it's like a strafing patrol. The Oval turns, on the other hand, move like a regular car, with both wheels going forward but one wheel moving slower than the other to create the turning effect. The pan and tilt control the on-board camera for a good Terry-eye view of the world.

I added two way web socket communications to Terry this week. So now the main battery voltage is updated and the current, err, current for each motor is shown at the bottom of the page. The two white boxes there give version information so you can see if the data is stale or not. I suspect a little async javascript on some model value will be coming so that items on the page will darken as they become stale due to lag on Terry or communications loss.
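The staleness check being hinted at can be tiny: keep the last version seen and the time it arrived, and fade the readouts when nothing fresh has turned up for a while. A sketch only, with made-up element ids, field names and URL rather than Terry's actual page:

var socket = new WebSocket('ws://terry.example:8080/telemetry');   // placeholder URL
var lastVersion = -1;
var lastUpdateMs = 0;

socket.onmessage = function (msg) {
    var data = JSON.parse(msg.data);
    if (data.version > lastVersion) {
        lastVersion = data.version;
        lastUpdateMs = Date.now();
        document.getElementById('battery').textContent = data.batteryVolts;
    }
};

// Every second, darken anything that has not been refreshed recently.
setInterval(function () {
    var stale = (Date.now() - lastUpdateMs) > 2000;
    document.getElementById('telemetry').style.opacity = stale ? '0.4' : '1.0';
}, 1000);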

I've been researching proximity detection and will be integrating a Kinect for longer distance measures soon. Having good reliable distance measurements from Terry to objects is the first step in getting him to create maps of the environment for autonomous navigation.

July 01, 2014 01:22 PM

June 26, 2014

Ben Martin: Atmel Atmega1284

I started tinkering with the Atmega1284. Among other things it gives you an expansive 16KB of SRAM, 2 UARTs and, looking at the chip, a bunch more IO. A huge plus is that you can get a nice small SMD version as well as this 40 pin DIP monster with the same 1284. Yay for breadboard prototypers who don't oven bake each board configuration! The angle of the photo seems to include the interesting bits. Just ignore the two opamps on the far right :)


I had trouble getting this to work with a ceramic resonator. The two xtal lines are right next to each other with ground just above, but the 3 pins on the resonator were always a bit hard to get into the right configuration for these lines. Switching over to a real crystal and 22pF caps I got things to work. The symptoms I was having with the resonator included non-reproducibility: sometimes things seemed to upload, sometimes not. Also, make sure the DTR line goes through a cap to the reset pullup resistor. See the wiring just to the right of the 1284.

I haven't adapted the Arduino makefile to work on this yet, so unfortunately I still have to upload programs using the official IDE. I have the makefile compiling for the 1284 but das blinken doesn't work when I "make upload".

June 26, 2014 12:25 AM

June 20, 2014

Ben Martin: 3d printed part repo for attaching to Actobotics structure

I started a github repo for 3d print source files to attach things to Actobotics structure. First up is a mount to secure a rotary encoder directly to some channel as shown below.


As noted in the readme, I found a few issues post print, so had to switch to subtractive modelling (dremel time) to complete it. I learnt a few things from this though, so the issues are less likely to bite me in the future. There are some comments in the scad file so hopefully it will be useful to others and maybe one day I'll get a PR to update it. The README file in the repo will probably be the one true source for update info and standing issues with each scad file.

You can just see the bolt below the rotary encoder. That has a little inset in the 3d printed part to stop it from free spinning when you screw in the 6-32 nut from below the channel.

June 20, 2014 12:54 AM

June 16, 2014

Ben Martin: Arduino and wireless networking

The nRF24L01 module allows cheap networking for Arduinos. Other than VCC, which it wants at 3v3, it uses SPI (4 wires) and a few auxiliary wires, one of which is an interrupt line. One of the libraries to drive these chips is RF24. Packets can be up to 32 bytes in length, and by default attempts to send less than 32 bytes result in sending a whole 32 byte block. Attempts to send more than 32 bytes seem to result in only the first 32 being sent. There is a CRC, which is 2 bytes and is on by default.

For doing some IoT things, using an HMAC might be a much better choice than just a CRC. The major advantage is that one can tell that something coming over a wireless link was from the expected source. Of course, if somebody has physical access to the Arduinos then they can clone the HMAC key, but one has to define what they want from the system. The HMAC provides a fairly good assurance that sensor data is coming from the Arduino you think it is coming from, rather than from somebody trying to send fake data. Another rather cute benefit is that the system doesn't accept junk data attacks: if the HMAC doesn't match, the packet doesn't get evaluated by the higher level software.
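Purely to illustrate the authenticate-then-accept idea on the host side (this is not the RF24HMAC code, and the key handling is simplified), the same check in nodejs with the built-in crypto module looks like:

var crypto = require('crypto');
var KEY = 'wonderful key';

// 32 byte user data packet plus a 32 byte HMAC-SHA256 trailer, as described above.
function sign(packet) {
    return crypto.createHmac('sha256', KEY).update(packet).digest();
}

function accept(packet, receivedMac) {
    var expected = sign(packet);
    if (expected.length !== receivedMac.length) return null;
    var diff = 0;
    for (var i = 0; i < expected.length; i++) diff |= expected[i] ^ receivedMac[i];
    return diff === 0 ? packet : null;   // junk or tampered packets never reach the caller
}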

Using a SHA-256 HMAC, the return ping times go from 50 to 150ms. Doubling can be explained by the network traffic, as the HMAC is 32 bytes and will double the amount of data sent. I think perhaps the remaining 50ms goes into hashing, but I'll have to measure that more specifically to find out.

Once I clean up the code a little I'll probably push it to github. A new RF24HMAC class delegates to the existing RF24 class and sends an HMAC packet right after the user data packet for you. The interface is similar; I'm adding some writenum() calls which I might make more like the nodejs Buffer API. Once you have added the user data packet, call done() to send everything including the HMAC packet.

RF24 radio(9,10);                             // CE, CSN pins
RF24HMAC radiomac( radio, "wonderful key" );  // wrap the radio with a shared key
radiomac.beginWrite();
radiomac.writeu32( time );                    // user data
bool ok = radiomac.done();                    // sends the data packet then the HMAC packet


The receiving end boils down to getting a "packet" which is really just the 32 bytes of user data that was sent. The one readAuthenticatedPacket() call actually gets 2 packets off the wire, the user data packet and the HMAC packet. If the hmac calculated locally for the user data packet does not match the HMAC that was received from the other end then you get a null pointer back from readAuthenticatedPacket(). The data is either authenticated or you don't get to see any of it.

RF24HMAC radiomac( radio, "wonderful key" );
uint8_t* packetData = 0;
if( packetData = radiomac.readAuthenticatedPacket() )
{
     int di = 0;
     uint32_t v = readu32( packetData, di );
     got_time = v;
     printf("Got payload %lu...\n\r",got_time);
}


Oh yeah, I also found this rather cute code to output hex while sniffing around.

June 16, 2014 02:06 AM

June 09, 2014

Blue Hackers: Workplace Bullying

Video: https://www.youtube.com/embed/wAgg32weT80

June 09, 2014 10:20 AM

June 03, 2014

Adrian Sutton: Swift

Apple have released Swift, their new programming language – designed to be familiar to Objective-C programmers and work well with the existing Cocoa frameworks. It’s far too soon to make substantial judgements about the language – that can only come after actually using it in real projects for some time. However, there’s nothing that stands out as incredibly broken, so with Apple’s backing it’s extremely unlikely that it won’t become a very commonly used language. After all, there’s plenty wrong with every other programming language and we manage to make do with them.

What I find most promising about it though is that many language design choices are justified by them preventing common causes of bugs in Objective-C or C (and many other languages).  For example:

“The cases of a switch statement do not “fall through” to the next case in Swift, avoiding common C errors caused by missing break statements.”

Excerpt From: Apple Inc. “The Swift Programming Language.” iBooks

They’ve replaced many common checkstyle/lint errors with better language design that makes the mistakes impossible (what a good idea). It could be argued that they could have taken more extreme approaches to find solutions or prevented more sources of errors with additional cleverness, but my initial take is that it has likely found a good balance between fixing common causes of bugs while still being familiar to Objective-C coders (its target audience) and working well with the existing frameworks and libraries.

We’ll likely find plenty to complain about, as always, but overall I suspect it will be a very nice language to work with.

June 03, 2014 08:49 AM

Ben Martin: TerryTorial: Wheel Feedback, Object detection, Larger size, and now a screen!

Terry the robot now has grown up a little, both semantically and physically. The 12 inch channel that was the link to the rear swivel wheel now has many friends which are up to 15 inches higher than the base beam. This places Terry's main pan/tilt camera at his top a little bit above table height, so he is far more of a presence than he used to be. On the upside, the wheel controller, shaft encoders, and batteries can all live on the lower level and there is more room for the actual core of Terry up top. I'm likely to add one or two more shelves to the middle of Terry for more controllers etc.


There is now also a 16x32 RGB matrix screen up front so Terry can tell you what he is thinking, what speed his wheels are rotating at, and if there is an obstacle that has been detected.

I'm rendering the framebuffer for the screen on the BeagleBone Black using Cairo and Pango (thus & freetype). The image data is extracted from Cairo as 32bit RGBA and packed down into planar data that the arduino expects. That packed framebuffer is then sent over UART to an Arduino 328 which takes care of refreshing the RGB matrix so that fake colour levels are achieved using a software PWM/BAM implementation. Yay for the BBB having 4.5 UARTs. I need to work out how to bring up the TX only UART as that is perfect for the one way communication used to drive the screen.
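The exact planar format the Arduino expects isn't spelled out here, so the following is only a sketch of the general packing idea: walk the Cairo RGBA buffer and build one bit-plane per BAM level, each holding one bit per pixel per colour channel (the channel order is an assumption).

function packPlanar(rgba, width, height, bamLevels) {
    var pixels = width * height;                     // 16 x 32 = 512 for this matrix
    var planes = [];
    for (var level = 0; level < bamLevels; level++) {
        var plane = new Uint8Array(Math.ceil(pixels * 3 / 8));
        var weight = 1 << (8 - bamLevels + level);   // test the top bamLevels bits of each channel
        var bitIndex = 0;
        for (var p = 0; p < pixels; p++) {
            for (var c = 0; c < 3; c++) {            // R, G, B; the alpha byte is skipped
                if (rgba[p * 4 + c] & weight) {
                    plane[bitIndex >> 3] |= 1 << (bitIndex & 7);
                }
                bitIndex++;
            }
        }
        planes.push(plane);                          // each plane then goes out over the UART
    }
    return planes;
}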

I should be able to get better colour precision using a Teensy 3.x as the screen driver. The Cortex-M4 just has more cycles to softpwm the screen faster and get a greater perceived colour range.

I customized the font a little in FontForge. Specifically I modified the kerning for "Te" to not waste as much space. The font is based on Cantarell by Dave Crossland. I've only just started on this open font to LED matrix stuff, but with some form of PWM and FreeType rendering the data I hope to be able to get nice antialiased font renders, even at the low resolution of 16x32.

The wheel encoders make a huge difference to the whole experience. Even without full autonomy the encoders allow you to have a repeatable oval or patrol path which can be followed. Being able to set up the speed to avoid large acceleration or jerky movement also means that the now taller Terry is still a very stable Terry when on the move.
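The "avoid jerky movement" part can be a handful of lines: clamp how far the commanded speed may move towards its target each control tick. The numbers below are placeholders rather than Terry's actual tuning:

// Simple slew-rate limiter so the now taller robot is not jerked around.
function limitAcceleration(current, target, maxStepPerTick) {
    var delta = target - current;
    if (delta > maxStepPerTick) delta = maxStepPerTick;
    if (delta < -maxStepPerTick) delta = -maxStepPerTick;
    return current + delta;
}

// e.g. called every 50ms with speeds in encoder counts per second
var speed = 0;
speed = limitAcceleration(speed, 4000, 150);   // ramps up over roughly 1.3 seconds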

June 03, 2014 04:50 AM

May 27, 2014

Adrian Sutton: Static and Dynamic Languages

Did you know that it’s perfectly fine to enjoy programming in both static and dynamic languages?

— Honza Pokorny (@_honza) May 26, 2014

I do a lot of coding in Java and JavaScript and it’s never bothered me that one has static types and one dynamic (and I’ve used plenty of other languages from both camps as well – it amuses me slightly that the two I use most often tend to be viewed as the worst examples of each type). I can see pros and cons of both approaches – static typing detects a bunch of common errors sooner which saves time but there are inevitably times where you wind up just fighting the type system which wastes time again. Better statically typed languages waste less of your time but at some point they all cause pain that’s avoided with dynamic languages. It winds up coming down largely to personal preference and is a little affected by what you’re building.

What I have learnt however is that mixing static and dynamic languages too closely is a recipe for pain and frustration. LMAX develops the main exchange in Java and recently started using Spock for unit tests. Spock is amazing in many, many ways but it comes with the Groovy programming language which is dynamically typed. There isn’t, or perhaps shouldn’t be, any tighter coupling than a class and its unit test, so the combination of static and dynamic typing in this case is extremely frustrating.

It turns out that the way I work with languages differs depending on the type system – or rather on the tools available but they are largely affected by the type system. In Java I rename methods without a second thought, safe in the knowledge that the IDE will find all references and rename them. Similarly I can add a parameter to an API and then let the compiler find all the places I need to update. Once there are unit tests in Groovy however neither of those options is completely safe (and using the compiler to find errors is a complete non-starter).

When writing JavaScript I’m not under any illusion that the IDE will find every possible reference for me so I approach the problem differently. I explore using searches rather than compiler errors and turn to running tests to identify problems more quickly because the compiler isn’t there to help. I also tend to write tests differently, expecting that they will need to cover things that the compiler would otherwise have picked up. The system design also changes subtly to make this all work better.

With static and dynamic typing too closely mixed, the expectations and approaches to development become muddled and it winds up being a worst-of-both-worlds approach. I can’t count on the compiler helping me out anymore but I still have to spend time making it happy.

That doesn’t mean that static and dynamic languages can’t co-exist on the same project successfully, just that they need a clearly defined, and relatively stable, interface between them. The most common example being static typing on the server and JavaScript in the browser, in which case the API acts as a buffer between the two. It could just as easily be server to server communication or a defined module API between the two though and still work.

May 27, 2014 04:50 AM

May 26, 2014

Ben MartinTo beep or not to beep.

There comes a time in the life of any semi autonomous robot when the study of the classics occurs. For Terry, the BeagleBone Black driven Actobotics construction, short of visiting the Globe the answer currently seems to be "To Beep".



Terry is now run by a RoboClaw 2x5A controller board with two 1024 pulse per revolution shaft encoders attached to an 8:1 ratio gearing on the wheel. That works out to 8192 counts per wheel revolution, an overall precision of about 13 bits from the shaft encoders relative to the wheel. In the future I can use this magnificent 4 inch alloy gear wheel as a torque multiplier by moving the motor to its own tooth pinion gear. Terry now also has an IR distance sensor out front, and given the 512MB of RAM on the BeagleBone some fairly interesting mapping can be built up by just driving around and storing readings. Distance + accurate shaft encoders for the win?

To use the RoboClaw from the BeagleBone Black I created a little nodejs class. It's not fully complete yet, but it is already somewhat useful. I found creating the nodejs class interesting because when coding for MCUs one gets quite used to the low level style of sending bytes and stalling until the reply comes in. For nodejs such a style doesn't translate well. Normally one would see something like

doMagic( 7, rabbits, function() {
    console.log("got them rabbitses, we did");
});

Where the callback function is performed when the task of pulling the number of rabbits from the hat is complete. For using the RoboClaw you might have something like

claw.getVersion( function( v ) {
  console.log("version:" + v.value );
});
claw.setQPPS( 1024 );

Which all seems fairly innocent. The call to getVersion will send the VERSION command to the RoboClaw at a given address (you can have many claws on the same bus). The claw should then give back a version string and a checksum byte. The checksum can be stripped off and verified by the nodejs class.

The trouble is that the second call, which sets the quadrature decoder pulses per second, will start to happen before the RoboClaw has had a chance to service the getVersion call. Trying to write a second command before you have the full reply from the first is a recipe for disaster. So the nodejs RoboClaw class maintains a queue of requests. The setQPPS() is not run right away, but enqueued for later. Once the bytes come back over the UART in response to the getVersion(), the callback is run and the RoboClaw nodejs class then picks the next command (setQPPS) and sends that to the RoboClaw. This way you get the strictly ordered serial IO that the RoboClaw hardware needs, but the nodejs can execute many commands without any stalling. Each command is handled in the order it is written in the nodejs source code; it just doesn't execute right away.
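
The queueing idea, boiled down to a sketch (the names here are illustrative rather than the real class's API):

function ClawQueue(uart) {
    this.uart = uart;     // a node-serialport style object with a write() method
    this.queue = [];      // pending { bytes, onReply } entries
    this.busy = false;    // true while a command is awaiting its reply
}
ClawQueue.prototype.enqueue = function (bytes, onReply) {
    this.queue.push({ bytes: bytes, onReply: onReply });
    this.kick();
};
ClawQueue.prototype.kick = function () {
    if (this.busy || this.queue.length === 0) return;
    this.busy = true;
    this.uart.write(this.queue[0].bytes);   // only the head of the queue is on the wire
};
// Call this once a complete, checksum verified reply has been read from the UART.
ClawQueue.prototype.replyArrived = function (reply) {
    var cmd = this.queue.shift();
    this.busy = false;
    if (cmd.onReply) cmd.onReply(reply);
    this.kick();                            // send the next queued command, if any
};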

Next up is trying to tune the PID controller variables for the robot. One might expect a command such as "move 1 wheel circumference forward" to work fairly exactly if the quality of the shaft encoders is good. Though for the wheel size and QPPS I get a little bit of overshoot currently. I suspect the current D is not sufficient to pull it up in time. It may be a non-issue when the wheels are down and friction helps slow things down.

Terry now also has an IMU onboard. I created a small class to read from the TWI HMC5883 magneto. You'll have to figure out your declination, and I have a small hack at about line 100 to alter the heading so that a 0 degree reading means that Terry is physically facing North. I should bring that parameter out of the class so that it's easier to adjust for other robots that might mount their IMU in a different physical orientation than the one I happened to use.
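
Pulling that correction out into parameters might look something like the sketch below; the parameter names are illustrative and the declination and mount offset are per-robot settings, not numbers from the actual code.

// Turn raw X/Y magnetometer readings into a heading where 0 means the
// robot is facing North. Both corrections are passed in rather than hard coded.
function headingDegrees(x, y, declinationDeg, mountOffsetDeg) {
    var heading = Math.atan2(y, x) * 180 / Math.PI; // raw heading from the chip
    heading += declinationDeg;                      // magnetic north -> true north
    heading += mountOffsetDeg;                      // how the chip sits on the robot
    return ((heading % 360) + 360) % 360;           // normalise to 0..359
}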

May 26, 2014 08:07 AM

Adrian SuttonAutomated Tests Are a Code Smell

Writing automated tests to prove software works correctly is now well established and relying solely or even primarily on manual testing is considered a “very bad sign”. A comprehensive automated test suite gives us a great deal of confidence that if we break something we’ll find out before it hits production.

Despite that, automated tests shouldn’t be our first line of defence against things going wrong. Sure they’re powerful, but all they can do is point out that something is broken; they can’t do anything to prevent it being broken in the first place.

So when we write tests, we should be asking ourselves, can I prevent this problem from happening in the first place? Is there a different design which makes it impossible for this problem to happen?

For example, checkstyle has support for import control, allowing you to write assertions about what different packages can depend on. So package A can use package B but package C can’t. If you’re concerned about package structure it makes a fair bit of sense. Except that it’s a form of testing and the feedback comes late in the cycle. Much better would be to split the code into separate source trees so that the restrictions are made explicit to the compiler and IDE. That way autocomplete won’t offer suggestions from forbidden packages and the code won’t compile if you use them. It is therefore much harder to do the wrong thing and feedback comes much sooner.

Each time we write an automated test, we’re admitting that there is a reasonable likelihood that someone could mistakenly break it. In the majority of cases an automated test is the best we can do, but we should be on the lookout for opportunities to replace automated tests with algorithms, designs or tools that eliminate those mistakes in the first place or at least identify them earlier.

May 26, 2014 03:42 AM

May 25, 2014

Adrian SuttonFinding Balance with the SOLID Principles

Kevlin Henney’s talk from YOW is a great deconstruction of the SOLID principles, delving into what they really say as opposed to what people think they say, and what we can learn from them in a nuanced, balanced way.

Far too often we take these rules as absolutes under the mistaken assumption that if we always follow them, our software will be more maintainable. As an industry we seem to be seeking a silver bullet for architecture that solves all our problems just like we’ve searched for a silver bullet language for so long. There is value in these principles and we should take the time to learn from them, but they are just tools in our toolkit that we need to learn not only how to use, but when.

May 25, 2014 10:09 PM

May 24, 2014

Adrian SuttonDon’t Get Around Much Anymore

Human Resource Executive Online has an interesting story related to my previous entry:

Ask yourself this question: If you were offered a new job in another city where you have no ties or networks, and you suspected that the job would probably not last more than three years (which is a good guess), how much of a raise would they have to give you to get you to move? Remember, you’d likely be trying to find a new job in a place where you don’t have any connections when that job ended. I suspect it would be a lot more than most employers are willing to pay.

Again, though, this should be driving up the number of remote jobs, but where are they all?

May 24, 2014 08:14 AM

May 13, 2014

Blue HackersKids Matter – mental health for schoolkids

The KidsMatter site is funded by the Australian Commonwealth Department of Health. There is a specific section for primary schools, KidsMatter Primary.

Seems like an excellent initiative that’s been going for some years already, but not every school will be involved with it yet (cost shouldn’t be a hindrance, it’s free).

So please take a look and mention it to your contacts (principal, P&C, teacher) at your kids’ school!

May 13, 2014 10:34 AM

May 10, 2014

Ben MartinGoing from Arduino TWI to BeagleBone Black Bonescript

It's handy for a robot to know what orientation it is holding, and for that a magnetometer can tell you where north is relative to the chip's orientation. For this I used the HMC5883. I started out with a cut down minimal version of code running on Arduino and then started porting that over to Bonescript for the BeagleBone Black to execute.

The HMC5883 only needs power (3v3), ground, and the two wires of a TWI.

One of the larger API changes of the port is the Arduino line:

Wire.requestFrom( HMC5883_ADDRESS_MAG, 6 );

Which takes the TWI address of the chip you want 6 bytes from. It has to be prefixed by another TWI transmission to set the register address to start reading from, in this case HMC5883_REGISTER_MAG_OUT_X_H_M = 0x03.

On the Bonescript side this becomes the following. The TWI address isn't used because the wire object already knows that, but you pass the register address to start reading from directly to the request to read bytes:


self.wire.readBytes( HMC5883_REGISTER_MAG_OUT_X_H_M,
                     BUFFER_SIZE,
                     function(err, res) { ... });

I pushed the code up to github bbbMagnetometerHMC5883. Note that getting "north" requires both a proper declination angle for your location and also some modification to allow for how your magneto chip is physically oriented on your robot. Actual use of the chip is really easy with the async style of nodejs.
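
Inside that callback the six bytes become three signed 16-bit axis readings, along these lines (a sketch, not the repository code; the HMC5883 returns the axes high byte first in X, Z, Y order, so check the datasheet for your part):

function toSigned16(hi, lo) {
    var v = (hi << 8) | lo;                   // combine the two bytes
    return v >= 0x8000 ? v - 0x10000 : v;     // two's complement to a signed value
}
self.wire.readBytes( HMC5883_REGISTER_MAG_OUT_X_H_M, BUFFER_SIZE,
    function(err, res) {
        if (err) return console.log(err);
        var x = toSigned16(res[0], res[1]);
        var z = toSigned16(res[2], res[3]);
        var y = toSigned16(res[4], res[5]);
        // heading comes from atan2(y, x), adjusted for declination and mounting
    });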

May 10, 2014 11:52 PM

April 29, 2014

Ben MartinActobotics and ServoCity, addictive Robot fun!

I have had the great fortune to play around with some of the Actobotics components. The main downside I've seen so far is that it's highly addictive!


What started out as a 3 wheel robot base then obtained a Web interface and an on board access point to control the speed and steering of the beast. The access point now runs OpenWrt, and so with the Beagle there are two Linux machines on board!

The pan and tilt is also controlled via the Web interface, and SVGA video streams over the access point from the top mounted web camera. There is a lithium battery mounted in the channel below the BeagleBone Black which powers it, the camera, and the TP-Link access point. The block of 8 AA rechargeables is currently tasked with turning the motors and servo.

It's early days yet for the robot. With a gyro I can make the pan gearmotor software-limited so it doesn't wrap the cables around by turning multiple revolutions. Some feedback from the wheel gearmotors, location awareness, ultrasound etc. will start to make commands like "go to the kitchen" more of a possibility.

April 29, 2014 12:52 PM

April 21, 2014

Clinton RoyPyCon Australia Call for Proposals closes Friday 25th April

There’s less than a week left to get your proposal in for PyCon Australia 2014, Australia’s national Python Conference. We focus on first-time speakers, so please get in touch if you have any questions. The full details are available at http://2014.pycon-au.org/cfp

April 21, 2014 09:21 AM

April 17, 2014

Ben FowlerBad HDMI cables and CEC

Fig: nasty HDMI cable from Maplin that apparently doesn't support CEC

I got myself a Raspberry Pi (a tiny, £30 hacker-friendly computer), and then set it up with RaspXBMC to stream my movie rips off the house file server to my Samsung 6-Series TV. Now, you're supposed to be able to use the TV remote to drive XBMC and it didn't work initially. No matter how hard I tried, I simply could not

April 17, 2014 09:26 PM

April 06, 2014

Blue HackersShift workers beware: Sleep loss may cause brain damage, new research says

April 06, 2014 08:48 AM

April 01, 2014

Blue HackersStudents and Mental Health at University

The Guardian is collecting experiences from students regarding mental health at university. I must have missed this item earlier as there are only a few days left now to get your contribution in. Please take a look and put in your thoughts!

It’s always excellent to see mental health discussed. It helps us and society as a whole.

April 01, 2014 11:34 PM

March 30, 2014

Anthony TownsBitcoincerns

Bitcoincerns — as in Bitcoin concerns! Get it? Hahaha.

Despite having an interest in ecash, I haven’t invested in any bitcoins. I haven’t thought about it in any depth, but my intuition says I don’t really trust it. I’m not really sure why, so I thought I’d write about it to see if I could come up with some answers.

The first thing about bitcoin that bothered me when I first heard about it was the concept of burning CPU cycles for cash — ie, set up a bitcoin miner, get bitcoins, …, profit. The idea of making money by running calculations that don’t provide any benefit to anyone is actually kind of offensive IMO. That’s one of the reasons I didn’t like Microsoft’s Hashcash back in the day. I think that’s not actually correct, though, and that the calculations being run by miners are actually useful in that they ensure the validity of bitcoin transfers.

I’m not particularly bothered by the deflationary expectations people have of bitcoin. The “wild success” cases I’ve seen for bitcoin estimate their value by hand-wavy arguments where you take a crazy big number, divide it by the 20M max bitcoins that are available, and end up with a crazy big number per bitcoin. Here’s the argument I’d make: someday many transactions will take place purely online using bitcoin, let’s say 75% of all transactions in the world by value. Gross World Product (GDP globally) is $40T, so 75% of that is $30T per year. With bitcoin, each coin can participate in a transaction every ten minutes, so that’s up to about 52,000 transactions a year, and there are up to 20M bitcoins. So if each bitcoin is active 100% of the time, you’d end up with about 1.04T coin-transactions per year carrying that GWP, and an exchange rate of $28 per bitcoin, growing with world GDP. If, despite accounting for 75% of all transactions, each bitcoin is only active once an hour, multiply that figure by six for $168 per bitcoin.
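
A quick back-of-envelope check of those figures (this just restates the assumptions above; the paragraph rounds 52,560 ten-minute slots down to about 52,000, which is where $28 and $168 come from):

var gwpInBitcoin = 40e12 * 0.75;                 // 75% of a $40T gross world product
var maxCoins = 20e6;
var txPerCoinPerYear = 6 * 24 * 365;             // one transaction every ten minutes, ~52,560
var coinTxPerYear = maxCoins * txPerCoinPerYear; // ~1.05T coin-transactions a year
console.log(gwpInBitcoin / coinTxPerYear);       // ~$28.5 per bitcoin if every coin is always active
console.log(6 * gwpInBitcoin / coinTxPerYear);   // ~$171 if each coin only moves once an hour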

That assumes bitcoins are used entirely as a medium of exchange, rather than hoarded as a store of value. If bitcoins got so expensive that they can only just represent a single Vietnamese Dong, then 21,107 “satoshi” would be worth $1 USD, and a single bitcoin would be worth $4737 USD. You’d then only need 739k bitcoins each participating in a transaction once an hour to take care of 75% of the world’s transactions, with the remaining 19M bitcoins acting as a value store worth about $91B. In the grand scheme of things, that’s not really very much money. I think if you made bitcoins much more expensive than that you’d start cutting into the proportion of the world’s transactions that you can actually account for, which would start forcing you to use other cryptocurrencies for microtransactions, for example.

Ultimately, I think you’d start hitting practical limitations trying to put 75% of the world’s transactions through a single ledger (ie hitting bandwidth, storage and processing constraints), and for bitcoin, that would mean having alternate ledgers which is equivalent to alternate currencies. That would involve some tradeoffs — for bitcoin-like cryptocurrencies you’d have to account for how volatile alternative currencies are, and how amenable the blockchains are to compromise, but, provided there are trusted online exchanges to convert one cryptocurrency into another, that’s probably about it. Alternate cryptocurrencies place additional constraints on the maximum value of bitcoin itself, by reducing the maximum amount of GWP happening in bitcoin versus other currencies.

It’s not clear to me how much value bitcoin has as a value store. Compared to precious metals, it is much easier to transport, much easier to access, and much less expensive to store and secure. On the other hand, it’s much easier to destroy or steal. It’s currently also very volatile. As a store of value, the only things that would make it better or worse than an alternative cryptocurrency are (a) how volatile it is, (b) how easy it is to exchange for other goods (liquidity), and (c) how secure the blockchain/algorithms/etc are. Of those, volatility seems like the biggest sticking point. I don’t think it’s unrealistic to imagine wanting to store, say, $1T in cryptocurrency (rather than gold bullion, say), but with only 20M bitcoins, that would mean each bitcoin was worth at least $50,000. Given a current price of about $500, that’s a long way away — and since there are a lot of things that could happen in the meantime, I think high volatility at present is a pretty plausible outcome.

I’m not sure if it’s possible or not, but I have to wonder if a bitcoin based cryptocurrency designed to be resistant to volatility would be implementable. I’m thinking (a) a funded exchange guaranteeing a minimum exchange rate for the currency, and (b) a maximum number of coins and coin generation rate for miners that makes that exchange plausible. The exchange for, let’s call it “bitbullion”, should self-fund to some extent by selling new bitbullion at a price of 10% above guidance, and buying at a price of 10% below guidance (and adjusting guidance up or down slightly any time it buys or sells, purely in order to stay solvent).

I don’t know what the crypto underlying the bitcoin blockchain actually is. I’m surprised it’s held up long enough to get to where bitcoin already is, frankly. There’s nominally $6B worth of bitcoins out there, so it would seem like you could make a reasonable profit if you could hack the algorithm. If there were hundreds of billions or trillions of dollars worth of value stored in cryptocurrency, that would be an even greater risk: being able to steal $1B would tempt a lot of people, being able to destroy $100B, especially if you could pick your target, would tempt a bunch more.

So in any event, the economic/deflation concerns seem assailable to me. The volatility not so much, but I’m not looking to replace my bank at the moment, so that doesn’t bother me either.

I’m very skeptical about the origins of bitcoin. The fact it’s the first successful cryptocurrency, and also the first definitively non-anonymous one is pretty intriguing in my book. Previous cryptocurrencies like Chaum’s ecash focussed on allowing Alice to pay Bob $1 without there being a record of anything other than Alice is $1 poorer, and Bob is $1 richer. Bitcoin does exactly the opposite, providing nothing more than a globally verifiable record of who paid whom how much at what time. That seems like a dream come true for law enforcement — you don’t even have to get a warrant to review the transactions for an account, because everyone’s accounts are already completely public. Of course, you still have to find some way to associate a bitcoin wallet id with an actual person, but I suspect that’s a challenge with any possible cryptocurrency. I’m not quite sure what the status of the digicash/ecash patents are/were, but they were due to expire sometime around now (give or take a few years), I think.

The second thing that strikes me as odd about bitcoin is how easily it’s avoided being regulated to death. I had expected the SEC to decide that bitcoins are a commodity with no real difference to a share certificate, and that as a consequence they can only be traded using regulated exchanges by financial professionals, or similar. Even if bitcoins still count as new enough to only have gotten a knee-jerk regulatory response rather than a considered one (which, at $500 a pop and with significant mainstream media coverage, I doubt), I would have expected something more along the lines of “bitcoin trading is likely to come under regulation XYZ, operating or using an unregulated exchange is likely to be a crime, contact a lawyer” rather than “we’re looking into it”. That makes it seem like bitcoin has influential friends who aren’t being very vocal in public, and conspiracy theories involving NSA and CIA/FBI folks suggesting leaving bitcoin alone for now might help fight crime, seem more plausible than ones involving Gates or Soros or someone secretly creating a new financial world order.

The other aspect is that it seems like there are only really four plausible creators of bitcoin: one or more super smart academic types, a private startup of some sort, an intelligence agency, or a criminal outfit. It seems unlikely to me that a criminal outfit would create a cryptocurrency with a strong audit trail, but I guess you never know. It seems massively unlikely that a legitimate private company would still be secret, rather than cashing out. Likewise it seems unlikely that people who’d just done it because it seemed like an interesting idea would manage to remain anonymous still; though that said, cryptogeeks are weird like that.

If it was created by an intelligence agency, then its life to date makes some sense: advertise it as anonymous online cash that’s great for illegal stuff like buying drugs and can’t be tracked, sucker in a bunch of criminals to using it, then catch them, confiscate the money, and follow the audit trail to catch more folks. If that’s only worked for silk road folks, that’s probably pretty small-time. If bitcoin was successfully marketed as “anonymous, secure cryptocurrency” to organised crime or terrorists, and that gave you another angle to attack some of those networks, you could be on to something. It doesn’t seem like it would be difficult to either break into MtGox and other trading sites to gain an initial mapping between bitcoins and real identities, or to analyse the blockchain comprehensively enough to see through most attempts at bitcoin laundering.

Not that I actually have a problem with any of that. And honestly, if secret government agencies lean on other secret government agencies in order to create an effective and efficient online currency to fight crime, that’s probably a win-win as far as I’m concerned. One concern I guess I have, though, is that if you assume a bunch of law-enforcement cryptonerds built bitcoin, they might also have a way of “turning it off” — perhaps a real compromise in the crypto that means they can easily create forks of the blockchain and make bitcoins useless, or just enough processor power that they can break it by bruteforce, or even just some partial results in how to break bitcoin that would destroy confidence in it, and destroy the value of any bitcoins. It’d be fairly risky to know of such a flaw, and trust that it wouldn’t be uncovered by the public crypto research community, though.

All that said, if you ignore the criminal and megalomaniacal ideas for bitcoin, and assume the crypto’s sound, it’s pretty interesting. At the moment, a satoshi is worth 5/10,000ths of a cent, which would be awesome for microtransactions if the transaction fee wasn’t at 5c. Hmm, looks like dogecoin probably has the right settings for microtransactions to work. Maybe I should have another go at the pay-per-byte wireless capping I was thinking of that one time… Apart from microtransactions, some of the conditional/multiparty transaction possibilities are probably pretty interesting too.

March 30, 2014 01:00 PM

March 25, 2014

Blue HackersRude vs. Mean vs. Bullying: Defining the Differences

March 25, 2014 01:16 AM

March 22, 2014

Anthony TownsBeanBag — Easy access to REST APIs in Python

I’ve been doing a bit of playing around with REST APIs lately, both at work and for my own amusement. One of the things that was frustrating me a bit was that actually accessing the APIs was pretty baroque — you’d have to construct urls manually with string operations, manually encode any URL parameters or POST data, then pass that to a requests call with params to specify auth and SSL validation options and possibly cookies, and then parse whatever response you get to work out if there’s an error and to get at any data. Not a great look, especially compared to XML-RPC support in python, which is what REST APIs are meant to obsolete. Compare, eg:

import xmlrpclib

server = xmlrpclib.Server("http://foo/XML-RPC")
print server.some.function(1,2,3,{"foo": "bar"})

with:

base_url = "https://api.github.com/"
resp = requests.get(base_url + "/repos/django/django")
if resp.ok:
    res = resp.json()
else:
    raise Exception(r.json())

That’s not to say the python way is bad or anything — it’s certainly easier than trying to do it in shell, or with urllib2 or whatever. But I like using python because it makes the difference between pseudocode and real code small, and in this case, the xmlrpc approach is much closer to the pseudocode I’d write than the requests code.

So I had a look around to see if there were any nice libraries to make REST API access easy from the client side. Ended up getting kind of distracted by reading through various arguments that the sorts of things generally called REST APIs aren’t actually “REST” at all according to the original definition of the term, which was to describe the architecture of the web as a whole. One article that gives a reasonably brief overview is this take on REST maturity levels. Otherwise doing a search for the ridiculous acronym “HATEOAS” probably works. I did some stream-of-consciousness posts on Google-Plus as well, see here, here and here.

The end result was I wrote something myself, which I called beanbag. I even got to do it mostly on work time and release it under the GPL. I think it’s pretty cool:

import beanbag

github = beanbag.BeanBag("https://api.github.com")
x = github.repos.django.django()
print x["name"]

As per the README in the source, you can throw in a session object to do various sorts of authentication, including Kerberos and OAuth 1.0a. I’ve tried it with github, twitter, and xero’s public APIs with decent success. It also seems to work with Magento and some of Red Hat’s internal tools without any hassle.

March 22, 2014 08:47 AM

March 20, 2014

Blue HackersOn inclusiveness – diversity

Ashe Dryden writes about Dissent Unheard Of – noting “Perhaps the scariest part of speaking out is seeing the subtle insinuation of consequence and veiled threats by those you speak against.”

From my reading of what goes on, much of it is not even very subtle, or veiled. Death and rape threats. Just astonishingly foul. This is not how human beings should treat each other, ever. I have the greatest respect for Ashe, and her courage in speaking out and not being deterred. Rock on Ashe!

The reason I write about it here on BlueHackers.org is that I think there is a fair overlap between issues of harassment and other nasties and depression, and it will affect individuals, companies, conferences and other organisations alike. So it’s important to call it out and actually deal with it, not regard it as someone else’s problem.

Our social and work place environments need to be inclusive for all, it’s as simple as that. Inclusiveness is key to achieving diversity, and diverse environments are the most creative and innovative. If a group is not diverse, you’re missing out in many ways, personally as well as commercially.

Please read Ashe’s thoughtful analysis of what causes people and places to often not be inclusive – such understanding is a good step towards effecting change. Is there something you can do personally to improve matters in an organisation? Please tell about your thoughts, actions and experiences. Let’s talk about this!

March 20, 2014 05:37 AM

March 02, 2014

Ben MartinThe vs1063 untethered

Late last year I started playing with the vs1063 DSP chip, making a little Audio Player. Quite a lot of tinkering is available for in/output on such a project, so it can be an interesting way to play around with various electronics stuff without it feeling like 10 das blinken lights tutorials in a row.

Over this time I tried using a 5 way joystick for input as well as some more unconventional mechanisms. The Blackberry Trackballer from SparkFun is my favourite primary input device for the player. Most of the right hand side board is to make using that trackballer simpler, boiling it down to a nice I2C device without the need for tight timing or 4 interrupt lines on the Arduino.




The battery in the middle area of the screen is the single 18650 protected battery cell that runs the whole show. The battery leads via a switch to a 3.7v -> 5v step up to provide a clean power input. Midway down the right middle board is a low dropout 3v3 regulator from which the bulk of the board is running. The SFE OLED character display wants its 5v, and the vs1063 breakout board, for whatever reason, wants to regulate its own 3v3 from a 5v input. Those are the two 5v holdouts on the board.

Last blog post I was still using a Uno as the primary microcontroller. I also got rid of that Uno and moved to an on board 328 running at 8Mhz and 3v3. Another interesting learning experience was finding when something 'felt' like it needed a cap. At times the humble cap combo is a great way to get things going again after changing a little part of the board. After the last clean up it should all fit onto 3 boards again, so might then also fit back into the red box. Feels a bit sad not to have broken the red box threshold anymore :|

Loosely the board comes out somewhat like the below... there are things missing from the fritzing layout, but it's a good general impression. The trackballer only needs power, gnd, I2C, and MR pin connections. With reasonable use of SMD components the whole thing should shrivel down to the size of the OLED PCB.

Without tuning the code to sleep the 328 or turn off the attiny84 + OLED screen (the OLED is only set all black), it uses about 65mA while playing. I'm running the attiny84 and OLED from an output pin on the 4050 level shifter, so I can drop power to them completely if I want. I can expect above 40hrs continuous play without doing any power optimization. So it's all going to improve from there ;)

March 02, 2014 01:33 PM

February 18, 2014

Daniel DevineSnorkelling at Perth

I was in Perth for linux.conf.au 2014 and for a place that I've heard sucks, I'm mostly seeing the opposite.

/blog_images/perth_1b.jpg

The view from near Mudurup Rocks.

Like Dirk Hohndel's talk, this post is not so much about technology as it is about the sea. I meant to write this post before I had even left Perth but I ended up getting sidetracked by LCA and general horseplay.

After Kate Chapman's enjoyable keynote I decided to blow that popsicle stand and snorkel at Cottesloe Reef (pdf) which is a FHPA (Fish Habitat Protection Area) just 30 minutes from the conference venue by bus.

Read more…

February 18, 2014 08:30 AM

February 04, 2014

Ben Martinattiny screen driver for parallel controlled character displays

This is the "details" post for controlling a 7 bit parallel OLED character display using an attiny as a display driver. See the previous post for an overview of the setup and video.

And now... the code. Apologies for some of the names of directories. Instead of branching and whatnot in git I just copied the code to library(n+1) and added the next feature in line as I went. I might at some stage do a write up detailing the stepping stones to get from the bare attiny84 to the code below.

To make it simpler, I'll call the "main" arduino the Uno. That is the one that wants to run the show. The screen is attached to the attiny84 which I'll just call the tiny84.

This is the code that one uses on the Uno: oled_clientspireal.ino. It is designed to look very close to the example it was based on (linked in the second line of its header comment).

The attiny84 runs attiny_oled_server_spi2.ino. The heavy lifting is done in SimpleMessagePassing5, which I link next. The main part of loop() is to process each full message that we get from SimpleMessagePassing5. There is some timeout() logic in there after that which allows the tiny84 to somewhat turn off the screen after a period of no activity on the SPI bus. noDisplay() doesn't turn off the OLED internal power, so you are only down to about 6mA there.
SimpleMessagePassing5 does two main things (it couples SPI handling and message byte boundaries in a single file; the academics would be unhappy): SimpleMessagePassing5.cpp. An earlier version of SMP did
USICR |= (1 << USIOIE); 
in init(). That line turns on the ISR for USI SPI overflow. But that is itself now done in a pin change ISR so that the USI is only active when the attiny84 has been chip selected.

serviceInput() is a fairly basic state machine, mopping up bytes until we have a whole "message" from the SPI master. If there is a complete message available then takeMessage() will return true. It is then up to the caller to do smp.buffer().get() to actually grab and disregard the bytes that comprise a message.

I'm guessing that at some stage I'd add a "Message" class that takeMessage could then return. The trick will be to do that without needing to copy from smp.buffer() so that sram is kept fairly low.
The Uno uses the shim class Shim_CharacterOLEDSPI2.cpp, which just passes the commands along to the attiny84 for execution.
This all seems to smell a bit like it wants to use something like Google protocol buffers for marshalling, but the hand crafted code works :) In some ways using GPB for such a simple interface as CharacterOLEDSPI2 might well be considered over-engineering, but I'm still tempted.

February 04, 2014 12:22 PM

January 30, 2014

Blue HackersA novel look at how stories may change the brain

January 30, 2014 12:32 PM

January 29, 2014

Ben MartinRipple counting trackballer hall effects

Sparkfun sells a breakout with a blackberry trackballer on it and 4 little hall effect sensors. One complete ball rotation generates 11 hall events. So the up, down, left, and right pins will pulse 11 times a rotation. The original thought was to hook those up to DPins on the arduino and use an interrupt that covered that block of pins to count these hall events as they came in. Then in loop() one could know how many events had happened since the last check and work from there. Feeling like doing it more in hardware though, I turned to the 74HC393 which has two 4 bit ripple counters in it. Since there were four lines I wanted to count I needed two 393 chips. The output (the count) from each counter in the 393 is offered on four lines (it's a 4 bit counter). So I then took those outputs and fed them into the MCP23x17 pin muxer which has 16 in/out pins on it. I used the I2C version of the MCP chip in this case. So it then boils down to reading the chip when you like and pulsing the MR pin on the 393s to reset the counters.


In the example sketch I pushed to github, I have a small list of choices that you can scroll through, and if you stop scrolling for "a while" then it selects the current entry. Which just happens to be the exact use case I am planning to put this hardware to next. Apart from the feel-good factor, this design should have less chance of missing events if you already have interrupt handlers which themselves might take a while to execute.

January 29, 2014 12:50 PM

January 03, 2014

Blue HackersBlueHackers @ linux.conf.au 2014 Perth

BlueHackers.org is an informal initiative with a focus on the wellbeing of geeks who are dealing with depression, anxiety and related matters.

This year we’re more organised than ever with a number of goodies, events and services!

- BlueHackers stickers

- BlueHackers BoF (Tuesday)

- BlueHackers funded psychologist on Thursday and Friday

- extra resources and friendly people to chat with at the conference

Details below…

This year, we’ll have a professional psychologist, Alyssa Garrett (a Perth local) funded by BlueHackers, LCA2014 and Linux Australia. Alyssa will be available Thursday and part of Friday, we’ll allocate her time in half-hour slots using a simple (paper-based) anonymous booking system. Ask a question, tell a story, take time out, find out what psychology is about… particularly if you’re wondering whether you could use some professional help, or you’ve been procrastinating taking that step, this is your chance. It won’t cost you a thing, and it’s absolutely confidential. We just offer this service because we know how important it is! There will be about 15 slots available.

You can meet Alyssa on Tuesday afternoon already, at the BoF. Just to say hi!

The booking sheet will be at the BoF and from Wednesday near the rego desk.

The BlueHackers BoF is on Tuesday afternoon, 5:40pm – 6:40pm (just before the speakers dinner). Check the BoF schedule closer to the time to see which room we’re in. The format will be similar to last year: short lightning talks by people who are happy to talk – either from their own experience, as a supporter, or as a professional. No therapy, just sharing some ideas and experience. You can prep slides if you want, but it’s not required. Anything discussed during the BoF stays there – people have to feel comfortable.

We may have some additional paper resources available during the conference, and a friendly face for an informal chat for part of the week.

Every conference bag will have a couple of BlueHackers stickers to put on your laptop and show your quiet support for the cause – letting others know they’re not alone is a great help.

If you have any logistical or other questions, just catch me (Arjen) at the conference or write to:  l i f e (at) b l u e h a c k e r s (dot) o r g

January 03, 2014 08:08 AM

January 02, 2014

Blue HackersHow emotions are mapped in the body

January 02, 2014 10:54 PM


Last updated: November 23, 2014 07:30 PM. Contact Humbug Admin with problems.