
Planet HUMBUG












January 14, 2014

Adrian Sutton – Hypercritical: The Road to Geekdom

Geekdom is not a club; it’s a destination, open to anyone who wants to put in the time and effort to travel there…

…dive headfirst into the things that interest you. Soak up every experience. Lose yourself in the pursuit of knowledge. When you finally come up for air, you’ll find that the long road to geekdom no longer stretches out before you. No one can deny you entry. You’re already home.

via Hypercritical: The Road to Geekdom.


January 14, 2014 10:17 AM


Adrian Sutton – Myth Busting: CSS Animations vs. JavaScript

 

Myth Busting: CSS Animations vs. JavaScript:

As someone who’s fascinated (bordering on obsessed, actually) with animation and performance, I eagerly jumped on the CSS bandwagon. I didn’t get far, though, before I started uncovering a bunch of major problems that nobody was talking about. I was shocked.

This article is meant to raise awareness about some of the more significant shortcomings of CSS-based animation so that you can avoid the headaches I encountered, and make a more informed decision about when to use JS and when to use CSS for animation.

Some really good detail on the performance of animations in browsers. I hadn’t heard of GSAP previously, but it looks like a good option for doing animations, especially if you need something beyond simple transitions.


January 14, 2014 04:20 AM


Adrian Sutton – Myth busting mythbusted

Christian Heilmann – Myth busting mythbusted:
Jack is doing a great job arguing his point that CSS animations are not always better than JavaScript animations. The issue is that all this does is debunking a blanket statement that was flawed from the very beginning and distilled down to a sound bite. An argument like “CSS animations are better than JavaScript animations for performance” is not a technical argument. It is damage control. You need to know a lot to make a JavaScript animation perform well, and you can do a lot of damage. If you use a CSS animation the only source of error is the browser. Thus, you prevent a lot of people writing even more badly optimised code for the web.
Some good points that provide balance and perspective on the way web standards evolve and how to approach web development.

January 14, 2014 04:19 AM


January 13, 2014

Adrian Sutton – Wifi Under Fedora Linux on a MacBook Pro 15″ Retina

Out of the box, Fedora 19 doesn’t have support for the Broadcom wifi chip in the MacBook Pro 15″ Retina. There are quite a few complex sets of instructions around for adjusting firmware, compiling bits and bobs, etc., but the easiest way to get it up and running on Fedora is via RPM Fusion.

You can do it by downloading a bunch of RPMs and stuffing around with USB drives, but it’s way easier if you set up network access first, via either a Thunderbolt Ethernet adapter (make sure it’s plugged in before starting up, as hotplugging Thunderbolt doesn’t work under Linux) or via Bluetooth. The Bluetooth connection can either be to a mobile phone sharing its data connection or, if you have another Mac around, it can share its wifi network over Bluetooth (turn on Internet Sharing in the Sharing settings panel).

Once you have network access, run a yum update so you have the latest packages from Fedora – it didn’t work for me with the plain Fedora 19 install.

Then go to rpmfusion.org and install first the “RPM Fusion free for Fedora 19” RPM, then the “RPM Fusion nonfree for Fedora 19” RPM.

Finally, run ‘sudo yum install broadcom-wl’. After a reboot, Linux should come back up with wifi working.

UPDATE (2014-01-13): I’ve found that each time a kernel upgrade comes through I’ve had to uninstall then reinstall broadcom-wl, which is kind of annoying. This time round I’m experimenting with installing ‘kmod-wl’, which brings in broadcom-wl as a dependency but which I hope does a better job of tracking kernel updates.

Also worth noting that this approach continues to work with Fedora 20.


January 13, 2014 07:55 AM


January 09, 2014

Adrian Sutton – Are iPads and tablets bad for young children?

The Guardian: Are iPads and tablets bad for young children?

Kaufman strongly believes it is wrong to presume the same evils of tablets as televisions. “When scientists and paediatrician advocacy groups have talked about the danger of screen time for kids, they are lumping together all types of screen use. But most of the research is on TV. It seems misguided to assume that iPad apps are going to have the same effect. It all depends what you are using it for.”

It all depends what you are using it for. I can’t think of a better answer to any question about whether a technology is good or bad. Kids spending time staring at an iPad watching a movie probably isn’t giving them much benefit apart from some down time to have a break, but sitting with your child playing games or reading stories on the iPad has many great benefits.

As a parent, I sometimes find this unsettling. But I try to be mindful that it is an open question whether it is unsettling because there is something wrong with it, or because it wasn’t a feature of my own childhood.

We’re often unaware of how strongly we are biased towards the way we were brought up. People who grew up in a family with two children generally want to have two children themselves. People who grew up on a farm think it’s important for their kids to get experience on a farm, etc. Even when you’re aware of that, it’s easy to forget it works the other way too – you may view certain activities as undesirable for your children purely because you didn’t do them in your childhood.

So what should a parent who fears their child’s proficiency on a tablet do? … “You need to acquire proficiency,” she says. “You can acquire it from them. They can teach you.”

This is probably the best advice in the entire article. Don’t be afraid of doing things with your child just because you aren’t familiar with them or confident in how to do them. Discovering new things together or having your child teach you something is one of the best ways for you both to learn and grow as people.

Finally, regarding the case of a four-year-old who was supposedly addicted to iPad use:

that “case”, so eagerly taken up by the tabloids, comprised a single informal phone call with a parent, in which <the doctor> gave advice. There was no followup treatment. He doesn’t believe that “addiction” is a suitable word to use of such young children.

So don’t believe everything you hear in the media…


January 09, 2014 10:40 AM


January 06, 2014

Adrian Sutton – “Proper” Scrolling Direction in Linux on MacBook Pro

If you’ve gotten used to your trackpad scrolling the same way as on iOS and (by default) OS X but you’re using Linux you’ll want to go to the “Mouse & Touchpad” settings panel and tick “Content sticks to fingers”.

Yeah, I know, shockingly simple, but for whatever reason I had assumed “Content sticks to fingers” related to some weird drag-and-drop system…


January 06, 2014 06:19 AM


January 03, 2014

Blue Hackers – BlueHackers @ linux.conf.au 2014 Perth

BlueHackers.org is an informal initiative with a focus on the wellbeing of geeks who are dealing with depression, anxiety and related matters.

This year we’re more organised than ever with a number of goodies, events and services!

- BlueHackers stickers

- BlueHackers BoF (Tuesday)

- BlueHackers funded psychologist on Thursday and Friday

- extra resources and friendly people to chat with at the conference

Details below…

This year, we’ll have a professional psychologist, Alyssa Garrett (a Perth local), funded by BlueHackers, LCA2014 and Linux Australia. Alyssa will be available Thursday and part of Friday; we’ll allocate her time in half-hour slots using a simple (paper-based) anonymous booking system. Ask a question, tell a story, take time out, find out what psychology is about… particularly if you’re wondering whether you could use some professional help, or you’ve been procrastinating about taking that step, this is your chance. It won’t cost you a thing, and it’s absolutely confidential. We just offer this service because we know how important it is! There will be about 15 slots available.

You can meet Alyssa on Tuesday afternoon already, at the BoF. Just to say hi!

The booking sheet will be at the BoF and from Wednesday near the rego desk.

The BlueHackers BoF is on Tuesday afternoon, 5:40pm – 6:40pm (just before the speakers dinner). Check the BoF schedule closer to the time to see which room we’re in. The format will be similar to last year: short lightning talks from people who are happy to talk – either from their own experience, as a supporter, or as a professional. No therapy, just sharing some ideas and experience. You can prep slides if you want, but it’s not required. Anything discussed during the BoF stays there – people have to feel comfortable.

We may have some additional paper resources available during the conference, and a friendly face for an informal chat for part of the week.

Every conference bag will have a couple of BlueHackers stickers to put on your laptop and show your quiet support for the cause – letting others know they’re not alone is a great help.

If you have any logistical or other questions, just catch me (Arjen) at the conference or write to:  l i f e (at) b l u e h a c k e r s (dot) o r g


January 03, 2014 08:08 AM


January 02, 2014

Blue Hackers – How emotions are mapped in the body

Researchers found that the most common emotions trigger strong bodily sensations, and the bodily maps of these sensations were topographically different for different emotions. The sensation patterns were, however, consistent across different West European and East Asian cultures, highlighting that emotions and their corresponding bodily sensation patterns have a biological basis.


January 02, 2014 10:54 PM


December 23, 2013

Blue Hackers – Feet up the Wall

I often spend 5-30 minutes a day with my feet up the wall. What’s going on in this pose? Your femur bones are dropping into your hip sockets, relaxing the muscles that help you walk and support your back. Blood is draining out of your tired feet and legs. Your nervous system is getting a signal to slow down. Stress release and recovery time. This position is great for sore legs and helps with digestion and circulation, as well as thyroid support. If you suffer from insomnia, try this before bed.

I’ve done this at times, but never thought through why it might be beneficial. Worth a try! And as they say, it doesn’t hurt to try – but of course it could, and if it does hurt, obviously stop straight away.

December 23, 2013 11:58 PM


December 19, 2013

Daniel Devine – TLDR: Getting a SSL Certificate Fingerprint in Python

For a project I am working on I need to know the SHA1 hash (the fingerprint) of a DER-encoded (binary) certificate file. I found it strange that nobody had offered up a “here’s a hunk of code” for something so simple. Any novice programmer would probably eventually arrive at this solution on their own, but for the sake of those Googling, here it is.

Read more…
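The actual snippet is behind the “Read more” link; as a minimal sketch of the idea (the function name and the cert.der filename here are hypothetical), it boils down to a SHA1 digest of the raw DER bytes, formatted as colon-separated hex pairs:

```python
import hashlib

def cert_fingerprint(der_bytes):
    """Return the SHA1 fingerprint of DER-encoded certificate bytes,
    formatted the way browsers display it: colon-separated hex pairs."""
    digest = hashlib.sha1(der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Usage (hypothetical filename):
# with open("cert.der", "rb") as f:
#     print(cert_fingerprint(f.read()))
```

Note this only works on DER input; a PEM file would need its base64 body decoded to DER first.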


December 19, 2013 05:21 AM


December 17, 2013

Ben Martin – The kernel of an Arduino audio player

…other formats. So naturally I took the well-trodden path and decided to use an Arduino to build a "small" audio player. The breadboard version is shown below. The show is driven by the Uno on the right. The vs1063 is on a Sparkfun breakout board at the top of the image. The black thing you see to the right of the vs1063 is an audio plug. The bottom IC in the line on the right is an attiny84. But wait, you say, don't you already have the Uno to be the Arduino in the room? But yes, I say, because the attiny84 is a SPI slave device which is purely a "display driver" for the 4-bit parallel OLED screen at the bottom. Without having to play tricks overriding the reset pin, the tiny84 has just enough pins for SPI (4), power (2), and the OLED (7), so it's perfect for this application.

The next chip up from the attiny84 is an MCP23017 which connects to the TWI bus and provides 16 digital pins to play with. There is a SPI version of the muxer, the 23S17. The muxer works well for buttons and chip selects which are not toggled frequently. It seems either my library for the MCP is slow or using TWI for CS can slow down SPI operations where selection/deselection is in a tight loop.

Above the MCP is a 1Mbit SPI RAM chip. I'm using that as a little cache and to store the playlist for ultra-quick access. There is only so much you can do with the 2KB of SRAM on the Arduino. Above the SPI RAM is a 4050 and an SFE bidirectional level shifter. The three buttons in the bottom left allow you to control the player reasonably effectively, though I'm waiting for my next red box day for more controller goodies to arrive.

I've pushed some of the code to control the OLED screen from the attiny up to GitHub, for example at:
https://github.com/monkeyiq/attiny_oled_server_spi2
I'll probably do a more thorough write-up about those source files and the whole display driver subsystem at some stage soon.


December 17, 2013 12:47 PM


December 16, 2013

Adrian Sutton – Why Are My JUnit Tests Running So Slow?

This is mostly a note to myself, but often when I set up a new Linux install, I find that JUnit tests run significantly slower than usual. The CPU is nearly entirely idle and there’s almost no IO-wait, making it hard to work out what’s going on.

The problem is that JUnit (or something in our tests or test runner) is doing a DNS lookup on the machine’s hostname. That lookup should be really fast, but if you’re connected to a VPN it may query the remote DNS server, which takes time and makes the tests take much longer than they should.

The solution is to make sure your hostname is in /etc/hosts pointing to 127.0.0.1 (and that /etc/hosts is searched first, though it usually is). Then the lookup is lightning fast and the tests start maxing out the CPU like they should.
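A quick way to check whether hostname resolution is the culprit – a sketch, not the post’s Java setup – is to time the lookup directly from Python, which goes through the same resolver configuration:

```python
import socket
import time

def time_hostname_lookup():
    """Time how long resolving this machine's own hostname takes.
    On a healthy /etc/hosts setup this should be near-instant."""
    start = time.perf_counter()
    socket.getfqdn()  # triggers a lookup of the local hostname
    return time.perf_counter() - start

elapsed = time_hostname_lookup()
print(f"hostname lookup took {elapsed * 1000:.1f} ms")
```

If that number jumps to hundreds of milliseconds while connected to the VPN, the /etc/hosts fix is the likely cure.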


December 16, 2013 11:15 PM


December 09, 2013

Ben Martin – Asymmetric Multiprocessing at the MCU level


<iframe allowfullscreen="" frameborder="0" height="281" mozallowfullscreen="" src="http://player.vimeo.com/video/81094386" webkitallowfullscreen="" width="500"></iframe>
Driving an OLED screen over SPI using an attiny84 as a display driver from Ben Martin on Vimeo.

Shown in the video above is the attiny84 being used as a display driver by an Arduino Uno. Sorry about having the ceiling fan on during capture. On the Uno side, I have a C++ "shim" class that has the same interface as the class used to drive the OLED locally. The main difference is that you have to tell the shim class which pin to use as chip select when it wants to talk to the attiny84. On the attiny, commands come in over SPI and the real library that can drive the OLED screen is used to issue the commands to the screen.

The fuss about that green LED is that when it goes out, I've cut the power to the attiny84 and the OLED. Running both of the latter with a reasonable amount of text on screen uses about 20mA; with the screen completely dark and the tiny asleep that drops to 6mA. That can go down to less than 2mA if I turn off the internal power in the OLED. Unfortunately I haven't worked out how to turn the power back on again other than resetting the power to the OLED completely. But being able to drop power completely means that the display is an optional extra and there is no power drain if there is nothing that I want to see.

Another way to go with this is using something like an MCP23S17 chip as a pin muxer and directly controlling the screen from the Uno. The two downsides to that design are the need to modify the real OLED library to use the pin muxer over SPI, and that you don't gain the use of a dedicated MCU to drive the screen. An example of the latter is adding a command to scroll through 10 lines of text. The Uno could issue that command and then forget about the display completely while the attiny handles the details of scrolling and updating the OLED.

Some issues in doing this were working out how to tell the tiny to go into SPI slave mode, then getting non-garbage from the SPI bus when talking to the tiny, and then working out acceptable delays at key times. When you send a byte to the tiny, the ISR that accepts that byte will take "some time" to complete. Even if you are using a preallocated circular array to dispense with the new byte as quickly as possible, the increment and modulo operations take time. Time that can be noticeable when the tiny is clocked at 8MHz and the Uno at 16MHz and you ramp up the SPI clock speed without mercy.

As part of this I also made a really trivial SPI calculator for the attiny84. By trivial I mean store, add, and print are the only operations, and there is only a single register for the current value. But it does show that the code that interacts with the SPI bus on the client and server side gets what one would expect from a sane adding machine. I'll most likely be putting this code up on GitHub once I clean it up a little bit.


December 09, 2013 03:50 AM


Paul Gearon – DDD on Mavericks

…Gherkin. However, after merging in a new feature the other day, I caused a regression (dammit).

Until now, I've been using judiciously inserted calls to echo to trace what has been going on in Gherkin. That does work, but it can lead to bugs due to interfering with return values from functions, needs cleanup, needs the code to be modified each time something new needs to be inspected, and can result in a lot of unnecessary output before the required data shows up. Basically, once you get past a particular point, you really need to adopt a more scalable approach to debugging.

Luckily, Bash code like Gherkin can be debugged with bashdb. I'm not really familiar with bashdb, so I figured I should use DDD to drive it. Like most GNU projects, I usually try to install it with Fink, and sure enough, DDD is available. However, installation failed with an error due to an ambiguous overloaded + operator. It turns out that this is due to an update to the C++ compiler in OS X Mavericks. The code fix is trivial, though the patch hasn't been integrated yet.

Downloading DDD directly and running configure got stuck on finding the X11 libraries. I could have specified them manually, but I wasn't sure which ones Fink likes to use, and the system has several available (some of them old). The correct one was /usr/X11R6/lib, but given the other dependencies of DDD I preferred to use Fink. However, until the Fink package gets updated, it won't compile and install on its own. So I figured I should try to tweak Fink to apply the patch.

Increasing the verbosity level on Fink showed up a patch file that was already being applied from:
/sw/fink/dists/stable/main/finkinfo/devel/ddd.patch

It looks like Fink packages all rely on the basic package, with a patch applied in order to fit with Fink or OS X. So all it took was for me to update this file with the one-line patch that would make DDD compile. One caveat is that Fink uses package descriptions that include checksums for various files, including the patch file. My first attempt at using the new patch reported both the expected checksum and the one that was found, so that made it easy to update the .info file.

If you found yourself here while trying to get Fink to install DDD, then just use these 2 files to replace the ones in your system:
  /sw/fink/dists/stable/main/finkinfo/devel/ddd.info
  /sw/fink/dists/stable/main/finkinfo/devel/ddd.patch

If you have any sense, you'll check that the patch file does what I say it does. :)

Note that when you update your Fink repository, then these files should get replaced.

December 09, 2013 12:34 AM


December 03, 2013

Adrian Sutton – Chris Hates Writing • Small things add up

Chris Poole – Small things add up:
By migrating to the new domain, end users now save roughly 100 KB upstream per page load, which at 500 million pageviews per month adds up to 46 terabytes per month in savings for our users. I find this unreal.
Just by moving images to a domain that doesn’t have cookies. Impressive, especially given that users upload data significantly more slowly than they download it, and HTTP/1.1 headers are sent uncompressed. Important to note, however:
Are these optimizations the first place to start? No, certainly not—the reason we made them is we had exhausted almost every other page performance trick. But at the scale of large sites like 4chan (especially those run on a shoe-string budget!), it’s important to remember: little things do add up.
Very few sites are at the scale of 4chan, but I wonder how quickly cookie traffic adds up for sites that use things like long polling.
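The quoted figure checks out as back-of-the-envelope arithmetic (assuming 100 KB means 100 × 1024 bytes and the total is expressed in binary terabytes):

```python
# 100 KB of cookie/header overhead saved per page load,
# at 500 million page loads per month.
saved_per_load = 100 * 1024          # bytes
loads_per_month = 500_000_000
total_bytes = saved_per_load * loads_per_month
terabytes = total_bytes / 1024**4    # binary terabytes
print(f"{terabytes:.1f} TB/month")   # prints 46.6 TB/month
```

So the "46 terabytes per month" claim is consistent with simple multiplication, with no hidden assumptions.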

December 03, 2013 01:04 AM


November 25, 2013

Adrian Sutton – Developing First Class Software with the Developers that Google Rejects

Ron Qartel: Developing First Class Software with the Developers that Google Rejects

Focus on building a great team and a great way to develop, not on hiring individual hotshots. I would caution, however, that XP isn’t necessarily the “great way to develop” that will work best for your team and your circumstances, but it’s certainly a good starting point. Just remember that the one key Agile principle is to continuously improve the way you work. So don’t just follow “the rules”.


November 25, 2013 11:43 PM


Adrian Sutton – Rewrote {big number} of lines of {old language} in {small number} of lines of {hip new language}

There are lots of projects these days moving from one language to the next. Whether that’s a good idea or not varies, but let’s accept that it was the right choice and the world is a better place for it. One thing really bugs me: inevitably, justifications of how successful that move has been include a claim that the number of lines of code was so significantly reduced.

We rewrote 1.5 million lines of Java in just 6,000 lines of Haskell!

The old system was 200k of tangled Java but the new system is just 4,000 lines of Clojure!

There are two things about these claims that bug me:

  1. Lines of code is a terrible metric – how have we not learnt that yet?
  2. It’s incredibly rare to do a complete system rewrite and actually build the same thing.

Inevitably the big gains in a rewrite come from a better understanding of the business requirements, so the new system actually does less stuff. Just because it meets all the same business requirements, and maybe even looks the same to users, doesn’t mean it’s doing all the same things or providing the same level of configurability or flexibility. That reduction in scope is what makes it a better system.

Flexibility and configurability are only assets if you’re actually using them. In all other cases they’re just waste and should be removed.


November 25, 2013 11:43 PM


November 20, 2013

Blue Hackers – Go Home on Time Day | 20th Nov 2013

Mark and then sync all your calendars – Wednesday 20 November is this year’s national Go Home on Time Day. Go Home on Time Day is an annual initiative of The Australia Institute, in partnership with beyondblue. The Day is a light-hearted way to start a serious conversation about work-life balance.


November 20, 2013 12:46 AM


November 16, 2013

Ben Martin – Sparkfun vs1063 DSP breakout

The Sparkfun vs1063 breakout gives you a vs1063 chip with a little bit of supporting circuitry. You have to bring your own microcontroller, SD card or other data source, and level shifting.


One thing which, to my limited knowledge, seems unfortunate is that VCC on the breakout has to be 5V. There are voltage regulators on the vs1063 breakout which give it 3.3V, 2.8V and 1.8V. Since all the regulators are connected to VCC and the board wants to make its own 3.3V, I think you have to supply 5V as VCC on the breakout board.

With the microSD needing to run on 3.3V, I downshifted the outbound SPI lines, the SD card chip select, and the few input pins to the vs1063 board. Those are the two little red boards on the breadboard. The SD card is simply on a breakout which does no level shifting itself. The MISO pin is good to go without shifting because 3.3V will register as high on a 5V input. Likewise the interrupt pin DREQ, which goes to pin 2 on the Uno, doesn't have any upshifting.

I had a few issues getting the XDCS, XCS, and DREQ to all play well from the microcontroller. A quick and nasty hack was to attach that green LED in the middle of the photo to the interrupt line so I could see when it was tripped. During playback it gives a PWM effect as 32-byte blocks of data are shuffled to the vs1063 as it keeps playing. DREQ is fired when the vs1063 can take at least another 32 bytes of data from the SPI bus into its internal buffer. Handy to have the Arduino doing other things and also servicing that interrupt as needed.

I'm hoping to use a head-to-tail 3.3V design for making a mobile version of the circuit. I would have loved to use 2xAA batteries, but might have to go another way for power. Unfortunately the OLED screen I have is 5V, but there are 3.3V ones of those floating around, so I can get a nice modern low-power display on there.

The next step is likely to prototype UI control for this, for which I'll probably use the 5V OLED in the meantime to get a feel for how things will work. I get the feeling that an attiny might sit between the main Arduino and the parallel OLED screen so it can be addressed on the SPI bus too. Hmm, attiny going into major power-save mode until chip-selected back into life.


November 16, 2013 03:08 AM


November 13, 2013

Tony Bilbrough – Day 8 Journalism – An intensive class wrap up

The fun is all had, the story wrote,

There’s smiles and tears we cannot quote,

Now new friends depart this week.

It’s all about the thing we learned,

with so much more to seek.

 

Not quite Haiku, or nearly Welsh, but it is definitely mine

 Ian Skidmore’s final Blog, a grand farewell to us all

http://skidmoresisland.blogspot.com.au/

 Just one of the eulogies by his peers, in Gentlemen Ranters

http://www.gentlemenranters.com/

 So the course is over, three years jammed into a few very, very fast and furious days.

And with this end, decisions must be made… whether to continue learning more about journalism, and to a lesser extent, whether I have improved my writing skills enough to continue the blog.

Then if a blog is to continue, what could the time scale be between ‘writes’, from a practical perspective?

 Blogging daily, as done during the course, has really severe drawbacks socially.

With a rather limited journalistic ability, one needs many hours to assemble thoughts of events, or activity, to get them into some sort of linear order.

The alternatives are either: lose those three or four hours of social activity time, or, as in the case of this exercise, lose sleep time!

Four hours’ sleep each night on a continuous basis just would not work for any extended period. But short bursts like this for a week or two might work OK, I guess.

 Much more importantly though – what did I get out of this course, in terms of writing ability?

Well unequivocally, a lot.

But for the sum of what I learned to work effectively, I would prolly need to become a lot more active in the writing side of at least some of the social groups I am active with.

This would be a clear fork in life’s path, and my guess is that this is another of those ‘dither points’. Ones that will oscillate continuously until someone or something gives direction more than just a nudge.

 Looking back, this final day in class came on with a bigger rush, than any adrenalin junkie could describe.

We participants had used so many different formats to gather data, and tell a story for the final Journal.

A mix of photographs, research through the Internet, phone calls recorded (with permission, of course), Vox Pop interviews, and recollection interviews, all with professional results.

I was so lucky that my story was edited first, while minds were fresh and everyone looked relaxed and beautiful.

But as the day wore on, we were absolutely stunned by the amount of editing work needed on our stories. Think for a moment – a story like this, that takes you only a few minutes to read, took our editor an hour or more to sort out to an acceptable stage for the layout process to begin.

I was asked by one of the other students to include this next bit of info in my story, as an example of what editors have to go through when passing a story up the line for publication.

Superfluous commas – thirty-nine; syntax errors – nine; extra spaces – 28; missing full stops – three; two repetitive statements; an out-of-place piece in a paragraph. There was even a typo, in spite of the spell checker!

Other writers also had the order of the story a little crossed up, or parts of the tale needing more clarification.

Interesting to me was that all the stories had similar editing problems, and every single one of us believed that we had submitted a perfect job to the editor. Sigh

 Summarising our course of ‘Three years of Journalism – in eight days’, and to show our educators that we did really listen most of the time :

Always use ‘Who did What’ with attributed statements.

Always think, Who Where What When Why and How, to write

Abide by the law; it’s there, so accept it.

Act ethically, be fair and meet audience expectations.

Build, and be a part of your community.

Be Credible, be able to back up your statements,

And finally, check your Grammar, and check your Grammar, and check……

 

Thank you Ursula and Bec and … for your time and patience with us all.

You know we all had a great time, finding we could do so much of what we had believed was impossible to achieve on day one, and reaching an understandable end for day eight.

You are a great teacher, and you did it well, Missus.

 Gentlemen Ranters tend to say, “if it isn’t totally accurate, at least it is accurate enough…”



November 13, 2013 01:25 PM


November 12, 2013

Tony Bilbrough – Day 7 Bringing on the blooming plants

It rained again last night, and it looks like the gardener will have to service the Lawn mower before long.

Ha, one can only dream!

 Had a thought about this course on my way to The Edge, in a carriage on a train.

I will probably never be able to watch another news story unfold without thinking of the cut-aways, voice-overs and other artifacts that are used to create a more interesting television tale.

 Now, as they say in the big world, ‘The story is wrote, the tale is told’, and the early part of the day was spent having the aforementioned scrutinised for Grammar lapses, Glaring omissions and Gloop like rubbish, by our long suffering coordinator.

The best bit was that The Piece was not completely discarded into that big dustbin, bottom left of screen (on an Ubuntu operating system). It survived with just a modest little rewrite.

 So, with a huge sigh of relief, the remaining hours were spent making the suggested changes and generally tarting up the script, and finally making it LOOK as if it had some potential to be turned into a printed story, on the morrow.

Fail Again…. Never shout words with capital letters, see?

 This was polishing tartiness in the truest sense. I was amazed at how many punctuation errors there were. Many of them were caused because those sorts of punctuation had been totally erased Terminator style, by the long thin cane of school days.

Not talking about forgetting full stops or commas, but rather the correct use for placing bracket types, hyphens, colons and semi colons. And I know a Grammar Nazi who loves nothing better than……..

There was a lunch break today – for some. But most of us spent the time huddled in desperation over our laptops, trying to have the finished version out by noon +30 minutes.

 The afternoon session turned into a great surprise (well almost everything turns into a surprise for me, I forget so quickly).

Intensive Marketing. A lot of the discussion related to our earlier work, on the various methods of defining and maintaining a consistency in Profiles, Marketing and measuring the effectiveness of a campaign throughout a given event, using a variety of tools that I will not expand upon, because the unloved and unclean won’t have the slightest idea what I am writing about. The last sentence a bit of a lungful, used to get the taught bit over as quickly as.

 How the instructor managed to shoe horn so much more usable data into our already saturated brains, one can only live in amazement and wonder.

 The day's wrap-up was clear and concise, and in precisely the language I understand.

Do the final layout tomorrow morning, and we will all party on, thru afternoon and night.

 Had a nice little interlude afterwards, when one of the course members came home with me, to take a few photos of the Native Bees, living in an old tree trunk down the bottom of the garden.

I hope that some of the photos work, but the bees were not very cooperative.

 The sunshine departs, with no sign of rain tonight, so I’m off to have a 7 kilometre run with Brisbane Southside Hash House Harriers tonight.

http://www.brisbanesouthside.com/

 Awesome, I finally got a Website tag into this blog. Learning, all about learning, you know.

 



November 12, 2013 09:14 AM


Blue Hackers: The Donkey in the Well

November 12, 2013 03:15 AM


November 11, 2013

Tony Bilbrough: Day 6, Confusion and Confessions

I must be sending the wrong tributes to our Rain Gods. Only 4 mm in the gauge for the whole weekend. Rubbing salt in, Emergency Services sent me 3 SMS warnings that my home was about to be pummelled by hail, and left awash with gurgling drains.

Can’t help wondering if the discussions held between self, and the scrawny, hirsute rain god, are being transmitted at the same rate as my ISP, Telstra Bigpond Cable.

Meaning that the last message to the aforesaid rain god was in January this year, and might only just have been received.

That message read “slow the bloody rain down, you miserable sod, or the fruit on my grape vines will get a dreaded botrytis”

 Before entering class I paused and reflected on the importance today, beside the fallen Elephant. Remembering…. Remembering the mates we all had, who never quite made it back from our various conflicts.

The eleventh Hour, of the eleventh Day, of the eleventh Month.

Until next year then, my living friends.

  Image

 

Now, this is important too – Today I learned with more than a little dismay, that I had completely misunderstood what I was supposed to do over the weekend.

Teacher said we were to get all the research and information for the final story, work it into something usable, so that the remaining time could be spent making it readable.

Definitely not what I thought I heard, when talking to Gazza about his coffee machine the other night.

 It’s now a little after 11pm on Monday evening; I have just caught up with where I was supposed to be, by that time last night.

Back to the Blog of today, and think of all I should have absorbed, for disgorgement to digital word, tonight.

As an aside, I now have a certain empathy for the Goose that produced the pâté de foie gras I will be consuming at Thursday’s Beefsteak and Burgundy luncheon. I am sure I better understand now, how forced feeding works.

One of the key issues in writing, is getting a story read by a wide audience, and text alone is the least intimate form for getting the message across.

So, how to improve the situation?

Well, for a start there needs to be a caption, to grab the reader and create a need to read on.

A bit of a Kicker.

The introduction should paint the overall message of the story we are trying to tell – a sort of precis, but interesting, and holding out the main issue, for all to see.

Always try to place a picture, or map, or some visual artefact close to the banner, but do make sure that it does not impinge on other stories close by.

If using quotes, and a good story should have at least three, make sure you have some medium to verify that what you have written, is accurate.

If you are not able to make that recording, read what you have written back to the Talent, and have him/her confirm it is correct. While it’s not the best solution, it’s better than not having a quote at all.

Always do your research for legal implications, and clearly introduce any legal matters in the story.

 Use the software Murally, to create the story line with sticky notes – I am a long way from using that right now and feel it may be another incarnation before any understanding of the usefulness takes hold. Pinned up there with the usefulness of Twitter.

Things not to do when writing

Don’t use an Acronym without first elucidating – I used HUMBUG in the as yet unpublished story, then went to type out what it stood for [it’s a computer group I have hung out with for about 15 years]. I was stunned when I found I had to go back to the web site to find what the letters really stood for. I like to think that in this particular case I was having a bit of a senior moment [like losing my car keys for the last 3 days]

Moving right along ….

Do not use slang or jargon unless writing to a niche group [Like the HUMBUG group?]

Remember your audience – hmmmm, mine is very small, and all very polite.

Don’t assume that your audience knows what you have just said or written – I can see a pattern coming up here. I’m no longer sure what I just wrote, either.

 With Interviews to camera

One needs good footage, use cutaways to enhance the story, eye witnesses can be good, even when they are not sure what they have seen? What?

Check sound – you can’t get away with poor sound. I have found Truth at last.

 For Elements of an Audio story

Use a good voice for the presenter, get sound grabs of relevant noises, use atmospheric noise if it fits with the story line. Use music to set a mood, but be aware of copyright restrictions.

And finally -

Not all stories will have these elements.

 Somewhere about this stage of the lectures, I found that I had left the laptop battery charger at home, and the warning light was blinking. So I made my excuses and bolted for an early train.

The rest of the tale will unfold …. Later, I’m sure

 Good night all.

PS. I’m sure that all will have noticed, that this blog has broken almost every single guideline we were given today.

Think Images, Links and a Cosmic picture at ‘The End’.



November 11, 2013 02:29 PM


November 10, 2013

Ben Martin: RePaper 2.7 inch epaper goodness from the BeagleBone


So the first image I chose to display after the epd_test was a capture of fontforge editing Cantarell Regular. Luckily, I've made no changes to the splineset so my design skills are not part of the image. The rendering of splines in the charview of fontforge uses antialiasing, as it was switched over to cairo around a year ago. As the eInk display is monochrome the image displayed is dithered back to 1 bit.

With the real-time collaboration support in fontforge this does raise the new chance to see a font being rendered on eInk as you design it (or hint it). I'm not sure how many fonts are being designed with eInk as the specific consumption ground. If you are interested in font design, check out Crafting Type which uses fontforge to create new type, and you should also be able to see the collaboration and HTML preview modes in action.

Getting the actual eInk display to go from the BeagleBone had a few steps. Firstly, I managed to completely fill up the 2GB of eMMC where my Angstrom was installed. So now I'm running the whole show off a high-speed 8GB SanDisk card. I spent a little extra cash on a faster card; it's one of the Extreme super panda + extra adjective SanDisk ones. The older kernel I had didn't have a duty file for the PWM pin that the driver wanted to use. Now that I have a fully updated BeagleBone Black boot area I have that file. FWIW I'm on kernel version 3.8.13-r23a.49.

Trying out the epd_test initially showed me some broken lines and after a little bit what looked like a bit of the cat from the test image. After rechecking the wireup a few times I looked at the code and saw it was expecting a 2 inch screen. That happens in a few places in the code. So I changed those to reflect my hardware. Then the test loop ran as expected!

The next step was getting the FUSE driver installed (change for size needed too). Then the python demos could run. And thus the photo above was made. My next step is to create a function to render cairo to /dev/epd/display in order to drive the display directly from a cairo app.
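In the meantime, the dither-and-pack step that sits between a rendered image and the display file can be sketched in plain Python. This is only a sketch, not the rePaper API: the Floyd-Steinberg dither and the MSB-first, 1-equals-black packing are assumptions (bit order and polarity vary by panel), and the /dev/epd/display path plus the 'U' update command are inferred from the demo code rather than verified here.

```python
# Minimal sketch: Floyd-Steinberg dither an 8-bit grayscale framebuffer down
# to packed 1-bpp bytes for the panel. Assumes width is a multiple of 8
# (264 is), MSB-first packing and 1 = black; real panels may differ.

def dither_1bpp(gray, width, height):
    """gray: flat row-major list of 0-255 values, len == width * height."""
    px = [float(v) for v in gray]
    out = bytearray(width * height // 8)
    for y in range(height):
        for x in range(width):
            i = y * width + x
            old = px[i]
            new = 255.0 if old >= 128 else 0.0
            if new == 0.0:                      # black pixel -> set its bit
                out[i // 8] |= 0x80 >> (x % 8)
            err = old - new                     # diffuse quantisation error
            if x + 1 < width:
                px[i + 1] += err * 7 / 16
            if y + 1 < height:
                if x > 0:
                    px[i + width - 1] += err * 3 / 16
                px[i + width] += err * 5 / 16
                if x + 1 < width:
                    px[i + width + 1] += err * 1 / 16
    return bytes(out)

def show(frame):
    """Push a packed frame at the FUSE driver (paths and 'U' are assumptions
    taken from the rePaper demo code, untested here)."""
    with open('/dev/epd/display', 'wb') as f:
        f.write(frame)
    with open('/dev/epd/command', 'wb') as f:
        f.write(b'U')                           # assumed update trigger
```

A cairo surface would feed `dither_1bpp` after a quick luminance conversion of its pixel data, which is roughly the function I still need to write.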

A huge thank you to rePaper for making this so simple to get going. The drivers for Raspberry and Beagle are up on their github page. I had been looking at the Arduino driver and its SPI code, thinking about porting that over to Linux, but now that's not necessary! I might design some cape love for this, perhaps with a 14 pin IDC connector on it for eInk attaching. Shouldn't look much worse than last night's SPI only monster, though something etched would be nicer.



The 2.7 inch changes are below, the first one is just slightly more verbose error reporting. You'll also want to set EPD_SIZE=2.7 in /etc/init.d/epd-fuse.

diff --git a/PlatformWithOS/BeagleBone/gpio.c b/PlatformWithOS/BeagleBone/gpio.c
index b3ded6f..d1df3df 100644
--- a/PlatformWithOS/BeagleBone/gpio.c
+++ b/PlatformWithOS/BeagleBone/gpio.c
@@ -767,7 +767,7 @@ static bool PWM_enable(int channel, const char *pin_name) {
                                usleep(10000);
                        }
                        if (pwm[channel].fd < 0) {
-                               fprintf(stderr, "PWM failed to appear\n"); fflush(stderr);
+                               fprintf(stderr, "PWM failed to appear pin:%s file:%s\n", pin_name, pwm[channel].name); fflush(stderr);
                                free(pwm[channel].name);
                                pwm[channel].name = NULL;
                                break;  // failed
diff --git a/PlatformWithOS/demo/EPD.py b/PlatformWithOS/demo/EPD.py
index da1ef12..41cc6c1 100644
--- a/PlatformWithOS/demo/EPD.py
+++ b/PlatformWithOS/demo/EPD.py
@@ -48,8 +48,8 @@ to use:

     def __init__(self, *args, **kwargs):
         self._epd_path = '/dev/epd'
-        self._width = 200
-        self._height = 96
+        self._width = 264
+        self._height = 176
         self._panel = 'EPD 2.0'
         self._auto = False

diff --git a/PlatformWithOS/driver-common/epd_test.c b/PlatformWithOS/driver-common/epd_test.c
index e2f2b5a..afe3cb8 100644
--- a/PlatformWithOS/driver-common/epd_test.c
+++ b/PlatformWithOS/driver-common/epd_test.c
@@ -72,7 +72,7 @@ int main(int argc, char *argv[]) {
        GPIO_mode(reset_pin, GPIO_OUTPUT);
        GPIO_mode(busy_pin, GPIO_INPUT);

-       EPD_type *epd = EPD_create(EPD_2_0,
+       EPD_type *epd = EPD_create(EPD_2_7,
                                   panel_on_pin,
                                   border_pin,
                                   discharge_pin,

November 10, 2013 05:29 AM


November 09, 2013

Tony Bilbrough: Day four, The interview and Interviewee

Gazza Curtis is a geek, a dedicated dabbler in all forms of electronic circuitry, and most importantly for today’s interview, a barista extraordinaire.

 I am not too sure how to present a conversation on paper – oops, as an electronic, digital, type face – so have used an [*] for the question and [-] for the answer.

So let ‘Understanding one man’s view on coffee’ unfold.

 

 Image

 

* Just what is it you like about your coffee?

- Well, let me say quite firmly, I am not a coffee snob, even though my daughter is. I have always enjoyed coffee over other beverages, and have had many different coffee makers. Two years back I bought the standard Aldi model for about $40, and have stayed with the brand ever since.

The actual sachets work out to only 35c each, so it makes a reasonably cheap cup. And it tastes great, he added.

*What is the difference between a cup of instant and a freshly ground bean coffee?

- Well, the pod type always tastes fresh because it’s only opened moments before the liquid gets forced thru it. A jar of Instant 42 bean coffee degrades from the moment the jar is opened, and continues to lose its fresh flavour from that moment on. Well, in my view, that is.

*How many different styles of ground coffee bean can you identify?

- Well, I have my preferred style; at the moment my favourite pod is the Expressi Abruzzo. But there are many more styles to choose from. We will be tasting the Tauro and Perugia which are mid strength brews, while the Reggio and Colombia rate a little higher at 8; the Abruzzo is very strongly caffeinated, and rated at 12. At the other end of the scale you could try a Decaf, which is only rated at 2, or a Florenzi at 3.

 

Image

*Do you have to have coffee at the beginning of each day to function normally?

- Absolutely, I have the same type of coffee maker at work as well! So I know I can always drink a consistent style of coffee.

*Do you think coffee makes one constipated?

- Well it is a diuretic. I have had people tell me it does, but personally never experienced that type of problem. [The rest of this conversation was edited out, in case my Grand children read this blog]

*Which country do you think produces the best tasting coffee bean?

- I think that almost any Arabica beans taste great under certain circumstances, particularly if well roasted. At one end of the scale, the Somali method of roasting the endosperm-laden beans in a pan over an open fire will obviously lead to an inconsistency in flavour, but that might well be a part of the character of their particular style. Not that I am biased, but the coffee grown inland from Airlie Beach, and up in the Atherton Tablelands, tastes quite spectacular. Their production is very limited, so that sort of knowledge is best kept a secret. Oh hell, did I just say that?

*Do you suffer withdrawal symptoms if you go without coffee for more than a day?

- Not as much as I used to.

*Is it very difficult to operate this particular machine?

- It’s really quite simple. There are just 7 points to remember:

.make sure there is water in the jug

.turn the power on

.put a mug under the spout and press the flush button, it looks like a shower spray!

.drop your chosen pod into the slot, but make sure it is located correctly.

.now all you have to do is choose if you want a full cup or half, and select that particular button.

.if you want milk, warm the mug and milk in a microwave, before putting under the spout, and you will get a very nice frothy brew.

 

We wander off to talk about visiting Vietnam, quadrocopters, the next Linux conference in Perth, while sipping strong black coffee.

Thank you Gazza

Interview event ends at 22.00 hrs.

 

 Lessons learned from doing my first interview.

 I was surprised that it took only a little time to think about the questions I needed to ask, and even more surprised to find that in general people really do enjoy talking about their interests. Where I have come unstuck on both attempts, is in placing the questions to the Talent in the correct order, so the narrative flowed a little more clearly.

Start the questions gently, and leave the Talent plenty of time to reflect on what they want to say. I noticed that after a moment’s pause Gazza would add another interesting point to fill out what he meant.

Leave the tough or ‘naughty’ questions to the end, in case the Talent spits the dummy, or worse, loses interest in talking to you.

And Most Importantly of all, treat coffee tasting in the same manner as Wine tasting

Sip and spit.

Last night I drank far too many different styles of coffee, while discussing the character of each pod, so that after getting home around midnight, the will to sleep was long gone and the brain remained buzzing away in front of the TV for several hours, trying to wind down!

Not a great start for the 30km bike ride on Sunday morning!



November 09, 2013 10:01 PM


November 08, 2013

Tony Bilbrough: Day Three, Channel 10 Television is alive and well

I have discovered that there are a lot of people in our world who actually savour coffee, and experiment with the roasted bean flavours and styles, in much the same way that others do with grape varieties in wine, or hop and wheat in beers.

So this morning on the way to The Edge, I thought to count the number of coffee shops, between Roma Street Station [no photography allowed] and our class, a distance of about one kilometre. Amazed to find 14 shops or stalls, all open for business at 9 am, and all seemed to be able to compete successfully! Good grief, Charlie Brown, what does Guinness know about this.

 Image

 

So made a point of getting in earlier today, to have a Long Black before the learning began.

 

Most of the class day was spent preparing for the weekend assignment – To interview one or two people, then write a story based on the questions asked at the interview.

Now I know all the work so far has been leading up to this, but as the lectures unfolded a vacuum seemed to grow ever bigger inside my head. Been thinking ‘bloody hell, how to find a topic’, where to begin to do research on a topic that is still a vague mist in my mind. Oh Wiki, you and I will be so close tomorrow.

 

So, if I had to summarize my activities today, it would fall into just three parts, or perhaps four sections. Ahh, and a little bit.

 

Spent the morning listening to others input, because I really had nothing at all to contribute.

I have realized I have virtually no concept at all of how to think up questions, to do the research, to ask an interviewee [known as 'talent' in the trade].

 

AND

finding out that one needs to learn heaps more about half a dozen more software tools [do not mention them here, save for later] – and I’m not talking about the scant knowledge acquired over the last 5 days on the inadequateness of WordPress and an unfathomable Twitter. No idea how many of my Twits are still jiggling around in the ether, but few seem to have found their way to their intended targets. Let me clarify the latter. I think that when I have worked out how to use the # and @ I might well have it all sorted. What to use now to disguise letters in naughty words? I ponder.

For the former, this has a long way to go. Each student has somehow managed to create a different version of WordPress, that has differing controls for layout and print style. Or in some cases, none at all.

For instance I can’t get my parchment style background to ‘stick’ and there seems no way to change type styles.

And this sort of thing is vital to indicate our very individual Brand of Blog. Cough cough.

 

AND

Our outing to the Channel 10 TV station at the top of Mt Coot-tha was certainly the most interesting of the three media outlets we have seen so far. We left the building with a feeling that the staff there really enjoyed working together. They were all so accessible, and are the first to encourage us to continue with the CitizenJ concept, giving us information on how to access their news system via the various social media they use.

I have completely changed my views on the role Channel 10 the Company, and their cheerful staff, play in our society’s quest for information.

 Image

 

For our 5pm evening break, I headed over to Archives in West End, dropped down a few well-hopped IPAs, and began writing up the day’s events while it was all fresh in my mind.

 

Early evening saw some of us heading a few hundred metres up the road to Avid Reader, for wines and nibbles, and to listen to Emma Carter’s insights at her book launch. The book, ‘Beyond the Logo’, is designed to help small to mid-sized companies understand the process of creating and marketing a Brand Image in the correct sequence, and with the right types of graphic design.

Several of the CitizenJ students bought a signed copy.

 

Image

 

Photography and Public Relations for the event was handled by Bridget Heinemann. Thank you Bridget, and further to our all too short talk, I know you would enjoy getting involved with CitizenJ.

Looking forward to the Humbug over at UQ, tomorrow afternoon, to find suitable ‘Talent’ to Interview, for an as yet unknown Topic. Don’t stop breathing.



November 08, 2013 01:48 PM


November 07, 2013

Tony Bilbrough: Day Two, is there a story out there?

And it seems like I’ve been here weeks already, and I know my way around the State Library complex like an old timer, and I really don’t get lost at all.

 

Just have to go back to the elephant once more, this time to follow up on the first public comment made to my Blog on Day One, that there was much more to the fallen statue than first appeared.

I discovered that the million dollar elephant had ended up on its nose because of a Rat. And I have to wonder how many passers by ever knew there really was a Rat out there – but on ‘The Other Side’?

It is a fairly large and handsome rat, at that. A sort of Cane Rat, or Rattus rattus, or perhaps even a King Rat, with its very own patch of green, green grass.

 

Image 

 

Now quickly, back to the class room to see what the day brings.

We looked at successful photo journalism styles, uses of slides with audio overlay, straight audio as well as descriptions of an event blending a mix of all three.

 

Today, for our ‘Outing’, we visited The Fairfax Newspaper offices at 420 Adelaide Street, and were shown around the Brisbane operations by Cameron, the Brisbane operations editor.

Even after the ABC visit yesterday, I still find it amazing that so much news is gathered, and later disseminated, by so few journalists. At Fairfax there were a mere handful in the entire Brisbane operation.

Honestly, one got the feeling that there were many more IT personnel at work maintaining and operating the electronic Guts in the building, than all the journalist staff using it! Only a feeling, mind you.

 Image

On our return we split into two teams. Our team worked a topic involving an editor, journalist and sound man on one side, wanting to get an emotional story out of a bloke, who five hours earlier had reversed his car down the driveway, over their two-year-old child.

The other team discussed surveillance, and non-disclosure by a journalist and editor, to a police official, and a Lawyer acting for an irate mother of a fourteen-year-old daughter, who had held a Rave? Party while mother and father were away.

 My own worst nightmare had arrived.

It was Role Play time.

No point in summarising how it all unfolded, as you had to be there to catch, and hold tight, to that roller coaster ride of polite discourse.

Suffice to say I am surrounded by a cast of superb word artists, and became enthralled by the verbal sparring that erupted, as each point was put, pushed and pulled.

 And at the day's half time [it was only 5pm and we all still had many more hours of review work and homework to do] I stepped away from the train station, and came upon a huge crime scene right in the middle of our little village.

Dozens of flood lights on stands, people with clip boards, a big thick power cable snaking back to a smelly generator, and a mass of young blokes all in working gear dashing about carrying boxes of gear in and out of four massive trucks. I instantly visualized the investigation set-up from TV series like CSI New York, or Body of Evidence.

A biggie right in my own suburban back yard.

 Oh yes. I know, right now, I really really know …….. I have the ultimate ‘up’ on tomorrow night’s homework. This I Truly Know. A real scoop for class tomorrow. Almost piddling with excitement. Honest Johnson.

 

Out with the Android to begin surreptitiously clicking away, until I realised that I could be more brazen and take photos of almost anything I wanted from a public foot path – another one of our lessons today.

Only five steps further down the foot path, there was a large sign.

“This Subway branch is closed all day for a National Photo Shoot. We apologise for any inconvenience”.

Oh Bloody Hell, crime story blown away, photos so wasted, and I’ll have to work like all the rest of the team, over the coming weekend.

You see, the essence of yet another of today’s lessons was, “to maintain honesty in Journalism”.



November 07, 2013 11:37 AM


November 06, 2013

Tony Bilbrough: Day One

What a fascinating course this has turned out to be.

Looking back on the day, I am wondering just how much I have really absorbed, and am guessing that this essay/blog will show it has been written ‘on the fly’, after judging more than a dozen wines in three hours, earlier this evening.

Image

Our day's course began with a walk past an elephant statue – perched on its nose.

Not perched as in Norwegian Parrot, rather Perched Precariously, as in it just fell off the back of a truck! I can’t help thinking that Terry Pratchett must have had a hand in it.

Lectures began with an outline of what we could expect to achieve, and moved on with individuals explaining why they wanted to become part of the world wide Citizen Journalists group. I had already decided that reporting on interesting events might cause readers to become involved in a range of different hobbies/pastimes.

The really interesting part here is that one really needs to discover what one's ‘Brand’ is, in order to create a consistent style. Many hints and descriptors were discussed, and most participants seemed quite able to identify with one or two of the dozen, or so, recognised categories.

I felt a little alarmed and confused here, because a part of each and every one of those descriptors for the ‘Brand’ belonged inside me. That’s not to say I felt I was all, or even many, at any given time – rather that I could easily identify with most all ‘Brands’ at some point in time.

In my case, our networking lunch was used to learn a little more about the vagaries of Twittering, though I’m no closer to seeing where it is applicable. The following weeks will tell, I’m sure. Thank you Elizabeth.

The tour thru the ground floor of the ABC studio/office set up was an eye opener. Technology that I had used in the past has changed so much in the last twenty years, and I really should have been prepared for this. So the overall impression left afterwards was of subdued amazement, though I really don’t know why, as the geek in me recognised the purpose of most all the equipment we were able to look at. Thank you Genevieve for an excellent description of your operation.

Image

The afternoon wrap-up was designed to have one look inside one's mind, to see where a particular writing style might lead. But at this point I can only surmise that any style within me, if it does indeed exist, might become a little more apparent as the course progresses.

 



November 06, 2013 01:24 PM


Tony Bilbrough: why?

Why I want to be a journalist?

Never stop learning. Allow yourself to stay excited by new technology, and always stay curious.

Espoused by Ian Skidmore, Writer, Journalist, Radio announcer. An all up Great Bloke and friend. 1927 – 2013

So why do I want to better understand the mechanics of Journalism?

Being able to express oneself in a manner that is interesting to an audience, has always fascinated me.

With Toastmasters one meets people who delighted their audience in the spoken tale and might later become successful businessmen, or to a lesser extent, politicians.

The same occurred when making documentaries about the Mining industry. Women who could join together disjointed events of underground operations, or in smelters or refineries, that held a viewing audience in awe, long after leaving the theatre. But writing has remained a short suit. I have wondered for a long time if it were possible to learn the fundamentals of telling a tale in an interesting way, by writing, instead of speaking.

I have followed some great English newspaper writers over the years, and their reporting style always seemed as clear as if they were standing nearby, chatting in a crowded room.

I believe that this short, but intense, course will improve the way I gather the information, to better make a story come alive in the minds of others.

Communication is the cornerstone of all replicating Matter, and Mankind is, of course, an integral part of this process. And in order to communicate fully, one must understand how the communication process works.

Having written a number of training manuals, which used paper, the thought occurred that it would be interesting to move on into the digital age aligned to Twitter, Facebook and G+ as Digital News.

Corinda November 2013



November 06, 2013 04:59 AM


November 04, 2013

Tony Bilbrough: Innocence in digital

 

 

baggins

This blog is a tribute to Ian Skidmore, who in his late 70′s decided to share his wonderful journalistic skills with all those of a similar age staggering into the digital era. Two weeks ago he died, and with that a great path of discovery ended for Skidmore’s Island.

Now I would like to tread gently into this digital age by recording some of the feelings and events experienced, as a tribute to the people who have helped shape so many different parts of my life.

As yet I have no idea how to link this Blog to my CitizenJ class, and have just found out that Blogging itself is already on the way to being superseded by other Social Media, such as Twitter, Facebook and G+

Now I must leave this blog and prepare for the course.



November 04, 2013 12:25 AM


October 24, 2013

Ben Martin: Open Logic Sniffing




The first trick was to work out how to use triggers in a simple way; which for me was: if I find a 'read' command (byte = 3) on MOSI, then that's a trigger. I've told the Bone to clock the SPI back to 5kHz and the sniffer to run at 10kHz, which gives me about 2.46 seconds of capture time. So I will surely see two read/write iterations one second apart.

I haven't worked out how to tell the software to ignore lines 1,2,4,5 so they don't show up as noise. So for now those physical snouts are explicitly grounded. After running the SPI analyser in mode 0 I get the below. The only MISO use is on byte 4 where the old value of 0x1 is read from the EEPROM. The activity on the right is setting write enable (0x6) writing the new value (0x2) and then write disable (0x4). Handy to see that chip select is held and released between those three write related tasks as the enable/disable have to be single byte commands where CS is dropped before they are effective.


The conversation is available in byte format in the analyse window shown below. Here you can see the read value go from 3 to 0 to 1. This is because I triggered on the first read, and the first read got back 3 because that is where I stopped the test program last time (ie, it left 3 at the nominated EEPROM address).


Now to get sigrok to have a sniff too.

October 24, 2013 03:35 AM


October 10, 2013

Blue Hackers: World Mental Health Day

World Mental Health Day, a yearly item of awareness on the agenda since 1992. A few links:

On this day, I would like to draw your attention to an article (in the Vancouver Sun) this week on Dr Gary Greenberg, about the American Psychiatric Association‘s Diagnostic and Statistical Manual of Mental Disorders (DSM), the leading authority on mental health diagnosis and research. This document is used in the US, (Canada?) and UK for assessment/diagnosis.

Dr Greenberg makes the point that in recent times in particular, the number of classified “disorders” has skyrocketed, in general but also in particular in the realm of young children. A small child having a temper tantrum can now be classified as a disorder! This in itself is of course already a problem. Obviously, not diagnosing something is detrimental. But from my perspective, lowering the bar too far and casting the net too wide has the potential to do a great deal of harm to the wellbeing of lots of people. I’d suggest that beyond not being helpful, it’s counterproductive.

Dr Greenberg also notes that with DSM regarded as authoritative, and diagnosis increasingly resulting in medication, the problem is exacerbated. When other organisations use DSM diagnoses as a reference point for policy, things go bad. Take for instance the forced medication of children based on ADHD diagnoses – it’s forced because the medication is a prerequisite for schools accepting them. Of course there will be kids with issues that merit some form of support and treatment. But you can see how the aforementioned trail from DSM to school authorities forces the child on medication, even though medication might not be the (most) appropriate avenue. Medicating everything is not the way – life is not a disease, and what’s considered “normal” has a pretty broad spectrum. Demanding narrow conformity and medicating everything outside that boundary is scary.

On the other hand, other support mechanisms (including education) hinge on diagnoses as well – so when a threshold is effectively raised, this might remove some people from the medication realm, but it also removes other support. So there it goes wrong again. Complicated matters.

October 10, 2013 12:41 AM


October 06, 2013

Daniel Devine: The Mark of Un-Authenticity

With 3D objects getting increasingly easy to scan and print, and print quality increasing, I think accusations of counterfeiting and unlicensed production are going to start coming thick and fast. Should we start marking 3D printed objects clearly as non-original, different, alternative, un-official to help defend against these claims?

/blog_images/unofficial-mark.png

An "Unofficial" mark.

Read more…


October 06, 2013 04:32 AM


October 03, 2013

Blue Hackers: Could Diet Sodas be Making You Depressed?

We now know that sugary sodas aren’t good for our bodies but a recent study has found that they may not be good for our mind or our mood either. Find out more about the link between consumption of sweetened drinks and depression.


October 03, 2013 10:33 PM


September 26, 2013

Daniel Devine: linux.conf.au 2014 - I'm Going

Last week (or thereabouts) I registered for linux.conf.au 2014, booked flights and accommodation, and then gave away literally all my savings to pay for it all. The conference itself is cheap; it's all the extra things that really cost money.

I have a feeling I will particularly enjoy LCA2014 because I'm keen to chase down some people who have some similar interests. Python, private email and federated social networking are on my agenda. I will also get my act together and take part in the keysigning (which I assume will happen) because it's been shown that building a web of trust is now more important than ever.

There are probably some other things I will do in Perth while I am there. Hopefully I can get some savings together and do the extra little things that make the trip worth it.


September 26, 2013 07:37 AM


September 13, 2013

Daniel Devine: Anonymous Authentication With Public Key Cryptography

I was looking at BrowserID which is an awesome decentralised authentication system that allows anybody with an email address to authenticate with a single identity. However, what if you don't want the service you are authenticating with to know your email address? What if you want several "single identities"? What if you want to be anonymous?

I've come up with a system which should help separate your personal identity from the account & data.

Read more…


September 13, 2013 02:27 AM


September 05, 2013

Pat Nichols: Black Dog

How to say it? Churchill’s black dog has come home. I know it. No denying it. No I don’t want to talk about it. Why mention it? Admitting it is the first step to overcoming it. Do I need your help? Probably. Will I accept it if offered? Almost certainly not. It’s a personal thing. I’ve beaten it before and will again. Right now is hard. Every day is hard.

Why mention it here? I know it is common, more common than you think. Look across the room. How many people can you see? 10? 20? 30? Think of the statistics. Depressive disorder affects 3.4% of adult men and 6.8% of adult women over any given 12 month period (Australian Bureau of Statistics figures for 1998) . In Australia. With people like us. See those people when you look up? Odds on, one of those people is suffering right now. Silently.

If it is you, don’t give up. I’ve beaten it before and will again. Don’t become a statistic. Don’t seek a permanent solution to a temporary problem. Can you beat it on your own? Maybe. Why risk it? Seek professional help. I’m doing it today.

Stigma kills. How many times can one have a migraine or diarrhea or any other malady? Admit the truth. It makes it easier to bear. Will it affect my promotion chances in the future and potential pay? Probably. That’s the stigma.

I’d sooner get better than promoted anyway.

Yesterday was my 8th year anniversary at my current employer. Management has known of my illness most of that time. In those eight years I've been hospitalised once. And promoted twice.


September 05, 2013 10:57 PM


July 29, 2013

Ben Martin: GDrive mounting released!


So, since the previous posts have been about the GDrive API and various snags I ran into along the way, this post is about how you can actually use this stuff.

Firstly, run up the ferris-capplet-auth app and select the GDrive tab. I know I should overhaul the UI for this auth tool, but since it's mostly only used once per web service I haven't found the personal desire to beautify it. Inside the GDrive tab, clicking on the "Authenticate with GDrive" button opens a dialog (which should become a wizard). The first thing to do, as it tells you, is visit the console page on google to enable the GDrive API. Then click or paste the auth link in the dialog to allow libferris to get its hands on your data. The auth link goes to google and tells you what libferris is wanting. When you OK that, you are given a "code" that you have to copy and paste back into the lower part of the auth dialog. Then OKing the dialog will have libferris get a proper auth token from google and you are all set.

So to get started the below command will list the contents of your GDrive:

$ ferrisls google://drive


To put a file up there you can do something like:

$ date >/tmp/sample.txt
$ ferriscp /tmp/sample.txt google://drive


And you can get it back with cat if you like. Or ferriscp it somewhere else etc.

$ fcat google://drive/sample.txt
Mon Jul 29 17:21:28 EST 2013


If you want to see your shares for this new sample file, use the "shares" extended attribute.

$ fcat -a shares google://drive/sample.txt
write,monkeyiq

The shares attribute is a BINEBO (Bytes In Not Equal Bytes Out). Yay for me coining new terms! This means that what you write to it is not exactly what you will get when you read back from it. The handy part of that is that if you write an email address into the extended attribute, you are adding that person to the list of folks who can write to the file. Because I'm using libferris without FUSE and bash doesn't understand libferris URLs, I have to use ferris-redirect in the below command. You can think of ferris-redirect like shell redirection (>), but you can also supply, with -a, the extended attribute to redirect data into. If I read back the shares extended attribute I'll see a new entry in there. Google will also have sent a notification email to my friend with a link to the file.

$ echo niceguy@example.com \
   | ferris-redirect -a shares google://drive/sample.txt
$ fcat -a shares google://drive/sample.txt
write,monkeyiq
write,Really Nice Guy

I could also add some hookup to your "contacts" to this, so your evolution addressbook nick names or google contacts could be used to look up a person. In the example above the names have been changed to protect the innocent, so hypothetically google thinks the name for that email address is Really Nice Guy because he is in my contacts on gmail.

All of this extends to the other virtual filesystems that libferris supports. You can "cp" from your scanner or webcam or a tuple of a database directly to google drive if that floats your boat.

I've already had a bit of a sniff at the dropbox API and others, so you might be able to bounce data between clouds in a future release.


July 29, 2013 07:47 AM


July 27, 2013

Ben Martin: The new google://drive/ URL!


The new OAuth 2.0 standard is so much easier to use than the old 1.0 version. In short, after being identified and given the nod once by the user, in 2.0 you only have to supply a single secret; in 1.x you have to use a per-message nonce, create hashes, send the key and token, etc. The main drawback of 2.0 is that you have to use TLS/SSL for each request to protect that single auth token. A small price to pay, as you might well want to protect the entire conversation if you are doing things that require authentication anyway.
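To make the contrast concrete, here is a small Python sketch (the token values are hypothetical, and the 1.0a signing shown is simplified: a real request also carries oauth_nonce, oauth_timestamp, oauth_consumer_key and friends in the signed parameter set):

```python
import base64
import hashlib
import hmac
import urllib.parse

# OAuth 2.0: a single bearer token on every request (hence the need for TLS).
def oauth2_header(access_token):
    return {"Authorization": "Bearer " + access_token}

# OAuth 1.0a: every request is individually signed with HMAC-SHA1 over a
# percent-encoded "base string" of method, URL, and sorted parameters.
def oauth1_signature(method, url, params, consumer_secret, token_secret):
    encoded = urllib.parse.urlencode(sorted(params.items()))
    base_string = "&".join(
        urllib.parse.quote(p, safe="") for p in (method, url, encoded))
    key = consumer_secret + "&" + token_secret
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

hdr = oauth2_header("ya29.hypothetical-token")
```

The 2.0 side really is just one header; all the fiddly state lives on the 1.0a side.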

A few caveats of the current implementation: mime types on uploaded files are based on file name sniffing. That is because for an upload you might be using cp foo.jpg google://drive, and the filesystem just copies the bytes over; but GDrive needs to know the mimetype for that new File at creation time. The GDrive PATCH method doesn't seem to let you change the mimetype of a file after it has been sent. A better solution will involve the cp code prenotifying the target location so that some metadata (mimetype) can be prefetched from the source file if desired. That would allow full byte sniffing to be used.

Speaking of PATCH, if you change metadata using it, you always get back a 200 response. No matter what. Luckily you also get back a JSON string with all the metadata for the file you have (tried to) update. So I've made my PATCH caller code ignore the HTTP response code and compare the returned file JSON to see whether the changes actually stuck. If a value isn't set how it is expected, my PATCH raises an exception. This is in contrast to the docs for the PATCH method, which claim that the file JSON is only returned "if successful".
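That verification step amounts to a dict comparison; a sketch of the idea (an illustrative helper, not the actual libferris code):

```python
def verify_patch(requested, returned_file_json):
    """Check that every field we asked PATCH to change actually changed.

    `requested` is the dict of metadata we sent; `returned_file_json` is the
    File resource the server echoed back (always with HTTP 200).
    Returns the list of fields that did NOT stick.
    """
    return [k for k, v in requested.items()
            if returned_file_json.get(k) != v]

# The server said 200 either way; only the echoed resource tells the truth.
failed = verify_patch({"title": "new.txt", "description": "x"},
                      {"title": "new.txt", "description": "old"})
```

A caller would raise an exception whenever `failed` is non-empty, mirroring the behaviour described above.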

Oh yeah, one other tiny thing about PATCH. If you patch the description it didn't show up in Firefox for me until I refreshed the page. Changing the title does update the Firefox UI automatically. I guess the sidepanel for description hasn't got the funky web notification love yet.

There are two ways I found to read a directory: using files/list and children/list. Unfortunately the latter, while returning only the direct children of a folder, also returns only a few pieces of information for those children, the most interesting being the child's id. On the other hand, files/list gives you almost all the metadata for each returned File. So on a slower link, one doesn't need thinking music to work out whether one round trip or two is the desired number. The files/list call also returns metadata for files that have been deleted, and files which others have shared with you. It is easy to set a query "hidden = false and trashed = false" for files/list so it does not return those dead files. Filtering on the server exclusively for files that you own is harder. There is a query alias sharedWithMe but no OwnedByMe to return the complementary set. I guess perhaps "not sharedWithMe" would == OwnedByMe.
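A sketch of how such a files/list request could be built, plus a client-side ownership filter as a fallback (the endpoint and q syntax follow the Drive v2 docs of the time; the helper names and the fallback logic are mine):

```python
import urllib.parse

def drive_list_url(base="https://www.googleapis.com/drive/v2/files"):
    # Server-side: drop trashed/hidden entries, and try the "not sharedWithMe"
    # trick in lieu of a missing OwnedByMe alias.
    q = "hidden = false and trashed = false and not sharedWithMe"
    return base + "?" + urllib.parse.urlencode({"q": q})

def owned_by_me(items):
    # Client-side fallback: filter a files/list page ourselves if the
    # server-side ownership query turns out not to work.
    return [f for f in items if not f.get("sharedWithMe", False)]

files = owned_by_me([{"title": "mine.txt"},
                     {"title": "theirs.txt", "sharedWithMe": True}])
```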

Currently I sort of ignore the directory hierarchy that files/list returns, so all your drive files appear directly in google://drive/ instead of in subdirectories as appropriate. I might leave that restriction in the first release. It's not hard to remove, but I've been focusing on upload, download, and metadata change.

Creating files, updating metadata, and downloading files from GDrive all work and will be available in the next libferris release. I have one other issue to cleanup (rate limiting directory read) before I do the first libferris release with gdrive mounting.

Oh, and big trap #2 for the young players: to actually *use* libferris on gdrive after you have done the OAuth 2.0 "yep, libferris can have access" dance, you have to go to code.google.com/apis/console and enable the Drive API for your account, otherwise you get access denied errors for every call. And once you go to the console and do that, you'll have to OAuth again to get a valid token.

A huge thank you to the two folks who contributed to the ferris fund raising after my last post proposing mounting Google Drive!

July 27, 2013 10:21 AM


July 23, 2013

Ben Martin: Mounting Google Drive?


One plus of all this is that the index & search in libferris will then extend its claws to GDrive as well as desktop files, as I&S is built on top of the virtual filesystem and uses it to return search results.

For those digging around maybe looking to do the same thing, see the oauth page for desktop apps; the meat seems to be in the Files API section. Reading over some of the API, the docs are not too bad. The files.watch call is going to take some testing to work out what is actually going on there. I would like to use the watch call for implementing "tail -f" semantics on the client. Which is in turn most useful with open(append) support. The latter I'm still tracking down in the API docs, if it is even possible. PUT seems to update the whole file, and PATCH seems very oriented towards doing partial metadata updates.

The trick that libferris uses of exposing the file content through the metadata interface seems to be less used by other tools. With libferris, using fcat and the -a option to select an extended attribute, you can see the value of that extended attribute. The content extended attribute is just the file's content :)

$ date > df.txt
$ fcat -a name df.txt
df.txt
 $ fcat -a mtime-display df.txt
13 Jul 23 16:33
$ fcat -a content df.txt
Tue Jul 23 16:33:51 EST 2013

Of course you can leave out the "-a content" part to get the same effect, but anything that wants to work on an extended attribute will also implicitly be able to work on the file's byte content with this mechanism.
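The dispatch trick is tiny when modelled in Python (a toy model, with names of my choosing rather than libferris internals): one special attribute name falls through to the byte content.

```python
class File:
    """Toy model of an extended-attribute namespace where the special
    attribute "content" resolves to the file's bytes."""
    def __init__(self, name, data, mtime="13 Jul 23 16:33"):
        self._ea = {"name": name, "mtime-display": mtime}
        self._data = data

    def getx(self, attr="content"):
        # Anything that works on an EA implicitly works on the content too.
        if attr == "content":
            return self._data
        return self._ea[attr]

f = File("df.txt", "Tue Jul 23 16:33:51 EST 2013\n")
```

Here `f.getx("name")` mirrors `fcat -a name df.txt`, and `f.getx()` mirrors a plain `fcat df.txt`.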

If anyone is interested in hacking on this stuff (: good ;) patches accepted. Conversely if you would like to be able to use a 'cp' like tool to put and get files to gdrive you might consider contributing to the ferris fund raising. It's amazing how much time these Web APIs mop up in order to be used. It can be a fun game trying to second guess what the server wants to see, but it can also be frustrating at times. One gets very used to being able to see the source code on the other side of the API call, and that is taken away with these Web thingies.

Libferris is available for Debian Hard Float and Debian armel soft floating point. I've just recently used the armhf to install ferris on an OMAP5 board. I also have a build for the Nokia N9 and will update my Open Build Service Project to roll fresh rpms for Fedora at some stage. The public OBS desktop targets have fallen a bit behind the ARM builds because I tend to develop on and thus build from source on desktop.


July 23, 2013 06:55 AM


July 21, 2013

Ben Martin: Like a Bird on a Wire(shark)...


I noticed a little while ago that cp to vimeo://upload didn't work anymore. I had earmarked that for fixing and recently got around to making that happen. It's always fun interacting with these Web APIs. Over time I've found that Flickr sets the bar for well documented APIs that you can start to use if you have any clue about making GET and POST requests etc. At one stage google had documented their API in a way that you could never use it. I guess they have fixed that by now, but it did sort out the pretenders from those who could at least sniff HTTP and were determined to win. The vimeo documentation IIRC wasn't too bad when I added upload support, but the docs have taken a turn for the worse it seems. Oh, one fun tip for the young players: when one API call says "great, thanks, well done, I've accepted your call" and then a subsequent one says "oh, a strange error has happened", you might like to assume that the previous call might not have been so great after all.

So I started tinkering around, adding oauth to the vimeo signup, and getting the getTicket call to work. Having getTicket working meant that my oauth-signed call was accepted too. I was then faced with the upload of the core data (which is normally done with a rather complex streaming POST), and the final "I'm done, make it available" call. On vimeo that last call seems to be two calls now: first a VerifyChunks call and then a Complete call.

So, first things first. To upload you call getTicket, which gives you an endpoint (an HTTP URL to send the actual video data to) as well as an upload ticket to identify the session. If you post to that endpoint URL and the POST converts the CGI parameters using multipart/form-data with boundaries into individual Content-Disposition: form-data elements, you lose. You have to have the ticket_id in the URL after the POST text in order to upload. One little trap.

So then I found that verifyChunks was returning "Error 709: Access to the chunk list failed". And that was after the upload had been replied to with "OK. Thanks for the upload.". Oddly, I also noticed that the upload of video data would hang from time to time. So I let the shark out of the pen again, and found that vimeo would return its "yep we're done, all is well" response to the HTTP POST call at about 38-42kb into the data. Not so great.

Mangling the vimeo.php test they supply to upload with my oauth and libferris credentials, I found that the POST had an Expect: 100-continue header. Right after the headers were sent, vimeo gave the nod to continue, and then the POST body was sent. I assume that just ploughing through and sending the headers followed immediately by the body confused the server end, and thus it just said "yep, ok, thanks for the upload" and dropped the line. Then of course it forgot the ticket_id because there was no data for it, so verifyChunks got no chunk list and returned the strange error it did. mmm, hindsight!

So I ended up converting from the POST to the newly available PUT method for upload. They call that their "streaming API", even though you can of course stream to a POST endpoint too; you just need to frame the parameters and add the MIME trailer to the POST if you want to stream a large file that way. Using PUT I was then able to verify my chunks (or the one single chunk, in fact) and the upload complete method worked again.
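The difference in framing between the two styles can be sketched like this (illustrative only: the URL shape, header and parameter names are placeholders, not the exact vimeo protocol):

```python
def multipart_post_body(boundary, fields, video_bytes):
    """POST style: every CGI parameter becomes a Content-Disposition part,
    and the whole body needs a closing boundary (the "MIME trailer")."""
    parts = []
    for name, value in fields.items():
        parts.append('--%s\r\nContent-Disposition: form-data; name="%s"\r\n\r\n%s\r\n'
                     % (boundary, name, value))
    parts.append('--%s\r\nContent-Disposition: form-data; name="file_data"\r\n\r\n'
                 % boundary)
    return "".join(parts).encode() + video_bytes + ("\r\n--%s--\r\n" % boundary).encode()

def put_request(ticket_id, video_bytes):
    """PUT style: the body is just the raw bytes; the session ticket rides
    in the URL, so the upload can be streamed without any body framing."""
    return ("/upload?ticket_id=" + ticket_id,
            {"Content-Length": str(len(video_bytes))})

url, hdrs = put_request("abc123", b"\x00" * 4)
```

With PUT there is nothing after the video bytes for a confused server to miss, which is presumably why the chunk list survives.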

In the end I've added oauth to my vimeo mounting, many thanks to the creators of the QOAuth library!

July 21, 2013 12:10 AM


July 20, 2013

Blue Hackers: Why Anti-Authoritarians are Diagnosed as Mentally Ill

In Bruce Levine’s career he has spoken with hundreds of people diagnosed with ODD & ADHD. An astonishing number of these people are also anti-authoritarians.


July 20, 2013 01:29 AM


July 11, 2013

Blue Hackers: Asperger’s and IT: Why my prejudices are great for your business | The Register

[...] my prejudices matter because what I recommend to my clients must be the absolute best solution for their needs. Point blank, no excuses, period. [...] I will design for you, and assist you in implementing, the best IT infrastructure that I can. This is what you pay me to do. Just please don’t be too upset if I forget little Johnny’s name.


July 11, 2013 02:03 AM


June 08, 2013

Ben Martin: BeagleBone Black: Walking the dog.




On an unrelated purchase, I got a small 1.8 inch TFT display that can do 128x160 with a bunch of colours using the st7735 chip. That's shown above running the qtdemo on the framebuffer. Of course, an animation might serve to better show that off. The display was on sale for $10 and so it was then on its way to me :) My original plan was to drive it from an Arduino... Looking around I noticed that Matt Porter had generously contributed a driver to run the st7735 over SPI from the Linux kernel. The video of him talking at ELC about this framebuffer driver was also very informative :) It seems the same TFT can be run from the Raspberry or Beagle series of hardware.

The wiring for the panel I got was a bit different from the adafruit one that Matt used, but once you have the pinouts it's not so hard to figure out. I've currently left the 5V rail unconnected on my TFT. On the BeagleBone Black the HDMI output captures a whole bunch of pins when it starts. Unfortunately some of those pins are needed for the little TFT. One might be able to reroute the SPI to the other bus or mux the pins differently to get around that and have HDMI and the TFT at once. But I wanted to get the TFT going to see if/how it worked before changing the pins.

I had found some info on putting a line in uEnv.txt to stop the HDMI cape from loading, but that didn't work for me. On my board I saw that in /sys/devices/bone_capemgr.9/slots the HDMI was the 5th cape. When I first echoed "-5" into the slots file to unload that cape, the kernel gave a backtrace. If I did the same on a freshly booted bone it would cleanly remove the HDMI cape though. So something was using the HDMI cape driver the first time which didn't want to be removed.

With the HDMI cape unloaded the next step is to load a "firmware" file that reserves the pins that the st7735fb driver wants to use. Since I used the same pins on the bone as the adafruit display wants I could just use the below.

echo cape-bone-adafruit-lcd-00A0  >  /sys/devices/bone_capemgr.9/slots

A dmesg showed that a new framebuffer device fb0 had come into existence.

[   85.280471] bone-capemgr bone_capemgr.9: slot #6: Requesting firmware 'cape-bone-adafru-00A0.dtbo' for board-name 'Override Board Name', version '00A0'
[   85.284645] bone-capemgr bone_capemgr.9: slot #6: dtbo 'cape-bone-adafru-00A0.dtbo' loaded; converting to live tree
...
[   86.235178] fb0: ST7735 frame buffer device,
[   86.235178]  using 40960 KiB of video memory
[   86.236687] bone-capemgr bone_capemgr.9: slot #6: Applied #5 overlays.

After a bunch of searching around trying various things, I found that prescaling in mplayer can display to the framebuffer:

# mplayer -ao null  -vo fbdev2:/dev/fb0 -x 128 -y 160 -zoom  ArduSat_Open_Source_in_orbit.mp4

The qtdemo also runs "ok" by executing the below. I say ok because it obviously expects a higher resolution display than 128x160...

qtdemoE -qws

It is tempting to have two screens and add a touch sensitive film to them. With a QML/QtQuick/TodaysRebrand^TM interface the GUI should work well and be flickable to many screens.

A great hack I look forward to is running a 32x16 LED DMD using a deferred rendering framebuffer driver like the st7735fb does. I see the evil plan now, release the BeagleBone Black for $45 and draw more C/C++ programmers to being kernel hackers rather than userland ones :)


June 08, 2013 03:18 AM


May 29, 2013

Ben Martin: FontForge: Rounding out the platforms for binary distribution


Now after another stint I have FontForge running under 32bit Windows 7. So finally I had a use for that other OS sitting on my laptop for all this time ;) The first time I got it to run it looked like below. I created a silly glyph to make sure that bezier editing was responsive...


The plan is to have the theme in use so that nice modern fonts appear in the menus, plus other expected tweaks, before making it a simple thing to install on Windows.

One, IMHO, very cool thing I did to get all this happening was to use the Open Build Service (OBS) to make the binaries. There are some DLL and header file drops for X floating around, but I tend to like to know where the libraries being linked into a program have come from. Call me old fashioned. So in the process I cross compiled chunks of X Window for Windows on the OBS servers. My OBS win32 support repository contains these needed libraries, right through to cairo and pango using the Xft backends to render.

There is a major schism there: if you are porting a native GTK+2 application over to win32, then you will naturally want to use the win32 backends to cairo et al and have a more native win32 outcome. For FontForge however, the program wants to use the native X Window APIs and the pango xft backend. So you need to be sure that you can render text to an X Window using pango's xft backend to make your life simpler. That is what the pangotest project I created does: just put "hello world" on an X Window using pango-xft.

A big thanks to Keith Packard who provided encouragement at LCA earlier this year that my crazy cross compile on OBS plan should work. I had a great moment when I got xeyes to run, thinking that things might turn out well after the hours and hours trying to cross compile the right collection of X libraries.

I should also mention that I'm looking for a bit of freelance hacking again. So if you have an app you want to also run on OSX/Windows then I might be the guy to make that happen! :) Or if you have cool C/C++ work and are looking to expand your team then feel free to email me.


May 29, 2013 10:01 AM


May 28, 2013

Anthony Towns: Parental Leave

Two posts in one month! Woah!

A couple of weeks ago there was a flurry of stuff about the Liberal party’s Parental Leave policy (viz: 26 weeks at 100% of your wage, paid out of the general tax pool rather than by your employer, up to $150k), mostly due to a coalition backbencher coming out against it in the press (I’m sorry, I mean, due to “an internal revolt”, against a policy “detested by many in the Coalition”). Anyway, I haven’t had much cause to give it any thought beforehand — it’s been a policy since the 2010 election I think; but it seems like it might have some interesting consequences, beyond just being more money to a particular interest group.

In particular, one of the things that doesn’t seem to me to get enough play in the whole “women are underpaid” part of the ongoing feminist, women-in-the-workforce revolution, is how much both the physical demands of pregnancy and being a primary caregiver justifiably diminish the contributions someone can make in a career. That shouldn’t count just the direct factors (being physically unable to work for a few weeks around birth, and taking a year or five off from working to take care of one or more toddlers, eg), but the less direct ones like being less able to commit to being available for multi-year projects or similar. There’s also probably some impact from the cross-over between training for your career and the best years to get pregnant — if you’re not going to get pregnant, you just finish school, start working, get more experience, and get paid more in accordance with your skills and experience (in theory, etc). If you are going to get pregnant, you finish school, start working, get some experience, drop out of the workforce, watch your skills/experience become out of date, then have to work out how to start again, at a correspondingly lower wage — or just choose a relatively low skill industry in the first place, and accept the lower pay that goes along with that.

I don’t think either the baby bonus or the current Australian parental leave scheme has any effect on that, but I wonder if the Liberal’s Parental Leave scheme might.

There are three directions in which it might make a difference, I think.

One is for women going back to work. Currently, unless your employer is more generous, you have a baby, take 16 weeks of maternity leave, and get given the minimum wage by the government. If that turns out to work for you, it’s a relatively easy decision to decide to continue being a stay at home mum, and drop out of the workforce for a while: all you lose is the minimum wage, so it’s not a much further step down. On the other hand, after spending half a year at your full wage, taking care of your new child full-time, it seems a much easier decision to go back to work than to be a full-time mum; otherwise you’ll have to deal with a potentially much lower family income at a time when you really could choose to go back to work. Of course, it might work out that daycare is too expensive, or that the cut in income is worth the benefits of a stay at home mum, but I’d expect to see a notable pickup in new mothers returning to the workforce around six months after giving birth anyway. That in turn ought to keep women’s skills more current, and correspondingly lift wages.

Another is for employers dealing with hiring women who might end up having kids. Dealing with the prospect of a likely six-month unpaid sabbatical seems a lot easier than dealing with a valued employee quitting the workforce entirely on its own, but it seems to me like having, essentially, nationally guaranteed salary insurance in the event of pregnancy would make it workable for the employee to simply quit, and just look for a new job in six month’s time. And dealing with the prospect of an employee quitting seems like something employers should expect to have to deal with whoever they hire anyway. Women in their 20s and 30s would still have the disadvantage that they’d be more likely to “quit” or “take a sabbatical” than men of the same age and skillset, but I’m not actually sure it would be much more likely in that age bracket. So I think there’s a good chance there’d be a notable improvement here too, perhaps even to the point of practical equality.

Finally, and possibly most interestingly, there’s the impact on women’s expectations themselves. One is that if you expect to be a mum “real soon now”, you might not be pushing too hard on your career, on the basis that you’re about to give it up (even if only temporarily) anyway. So, not worrying about pushing for pay rises, not looking for a better job, etc. It might turn out to be a mistake, if you end up not finding the right guy, or not being able to get pregnant, or something else, but it’s not a bad decision if you meet your expectations: all that effort on your career for just a few weeks pay off and then you’re on minimum wage and staying home all day. But with a payment based on your salary, the effort put into your career at least gives you six month’s worth of return during motherhood, so it becomes at least a plausible investment whether or not you actually become a mum “real soon now” or not.

According to the 2010 tax return stats I used for my previous post, the gender gap is pretty significant: there’s almost 20% less women working (4 million versus 5 million), and the average working woman’s income is more than 25% less than the average working man’s ($52,600 versus $71,500). I’m sure there are better ways to do the statistics, etc, but just on those figures, if the female portion of the workforce was as skilled and valued as the male portion, you’d get a $77 billion increase in GDP — if you take 34% as the proportion of that that the government takes, it would be a $26 billion improvement to the budget bottom line. That, of course, assumes that women would end up no more or less likely to work part time jobs than men currently are; that seems unlikely to me — I suspect the best that you’d get is that fathers would become more likely to work part-time and mothers less likely, until they hit about the same level. But that would result in a lower increase in GDP. Based on the above arguments, there would be an increase in the number of women in the workforce as well, though that would get into confusing tradeoffs pretty quickly — how many families would decide that a working mum and stay at home dad made more sense than a stay at home mum and working dad, or a two income family; how many jobs would be daycare jobs (counted as GDP) in place of formerly stay at home mums (not counted as GDP, despite providing similar value, but not taxed either), etc.
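A quick back-of-envelope check of those figures, using only the round numbers quoted above (the quoted $77 billion presumably comes from the unrounded statistics, since the rounded inputs land a touch lower):

```python
# Back-of-envelope check of the 2010 tax return figures quoted above.
women_workers, women_avg = 4_000_000, 52_600  # quoted female workforce
men_workers, men_avg = 5_000_000, 71_500      # quoted male workforce (for context)

# If the existing female workforce earned the male average wage:
gdp_gain = women_workers * (men_avg - women_avg)  # ~ $75.6 billion per year

# At the quoted ~34% government share:
budget_gain = 0.34 * gdp_gain                     # ~ $25.7 billion per year
```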

I’m somewhat surprised I haven’t seen any support for the coalition’s plans along these lines anywhere. Not entirely surprised, because it’s the sort of argument that you’d make from the left — either as a feminist, anti-traditional-values, anti-stay-at-home-mum plot for a new progressive genderblind society; or from a pure technocratic economic point-of-view; and I don’t think I’ve yet seen anyone with lefty views say anything that might be construed as being supportive of Tony Abbott… But I would’ve thought someone on the right (Bolt, or Albrechtsen, or Australia’s leading libertarian and centre-right blog, or the Liberal party’s policy paper) might have canvassed some of the other possible pros to the idea rather than just worrying about the benefits to the recipients, and how it gets paid for. In particular, the argument for any sort of payment like this shouldn’t be about whether it’s needed/wanted by the recipient, but how it benefits the rest of society. Anyway.


May 28, 2013 04:01 PM


May 19, 2013

Ben Martin: Some amateur electronics: hand made 8x8 LED matrix

The LeoStick is in the top right of the picture. Thanks again to Freetronics for giving those little gems away at LCA!

<iframe allowfullscreen="" frameborder="0" height="281" mozallowfullscreen="" src="http://player.vimeo.com/video/66445147" webkitallowfullscreen="" width="500"></iframe>
8x8 LED matrix, two 595 shifties and a ULN2003 current sink from Ben Martin on Vimeo.

The LEDs to light on a row are selected by a 595 shift register providing power for each row. The resistors are on the far right of the grid leading to that shift register. The cathodes for each individual column are connected together leading to the top of the grid (as seen in the video). Those head over to a uln2003 current sink IC. In the future I'll use either two 2003 chips or one single 2803 (which can do all 8 columns at once) to get the first column to light up too.

The uln2003 is itself controlled by supplying power to the opposite side to select which column's cathodes will be grounded at any given moment. The control of the uln2003 is also done by a 595 shift register which is connected to the row shifty too. The joy of all this is you can pump in the new state and latch the shift registers at once to apply the new row pattern and select which column is lit.

The joy of this design is that I can add 8x8 layers on top at the cost of 8 resistors and one 595 to perform row select.

There are also some still images of the array if your interest is piqued.

The 595 chips can be had for around 40c a pop and the uln2003 for about 30c. LEDs in quantity 500+ go at around 5-7c a pop.

The code is fairly unimaginative, mainly to see how well the column select works and how detectable it is. In the future I should set up a "framebuffer" to run the show and have a timer refresh the array automatically...

#define DATA   6  // serial data into the 74HC595 chain
#define LATCH  8  // latch low while shifting, high to apply the new state
#define CLOCK 10  // digital 10 to pin 11 on the 74HC595

void setup()
{
  pinMode(LATCH, OUTPUT);
  pinMode(CLOCK, OUTPUT);
  pinMode(DATA, OUTPUT);
}

void loop()
{
  // Count the row pattern i from 0..255; for each pattern, walk a single
  // column-select bit across the second shift register.
  for ( int i = 0; i < 256; i++ )
  {
    for ( int col = 1; col < 256; col <<= 1 )
    {
      digitalWrite(LATCH, LOW);
      shiftOut(DATA, CLOCK, MSBFIRST, col );  // column select (via the uln2003)
      shiftOut(DATA, CLOCK, MSBFIRST, i   );  // row pattern
      digitalWrite(LATCH, HIGH);              // apply both registers at once
    }

    delay(20);
  }
}
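The loop above shifts two bytes per latch: a column-select byte for the ULN2003's 595 and a row-pattern byte for the row 595. The "framebuffer" refresh mooted earlier would derive those same two bytes per column from an 8x8 array on each pass; here is a Python sketch of that derivation (names hypothetical, and on the real hardware this logic would live inside loop() or a timer interrupt):

```python
# Sketch of deriving the two shift-register bytes per refresh pass from an
# 8x8 framebuffer. fb[row] is a list of 8 ints (1 = LED on). Column c is
# selected by grounding its cathodes via the ULN2003; the row byte carries
# the anode pattern for that column. All names here are hypothetical.
def refresh_bytes(fb):
    frames = []
    for c in range(8):
        col_select = 1 << c                   # byte for the column-select 595
        row_pattern = 0
        for r in range(8):
            if fb[r][c]:
                row_pattern |= 1 << r         # byte for the row 595
        frames.append((col_select, row_pattern))
    return frames

# A frame with only the top-left LED lit:
fb = [[0] * 8 for _ in range(8)]
fb[0][0] = 1
print(refresh_bytes(fb)[0])   # (1, 1): column 0 selected, row 0 driven
```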




May 19, 2013 11:59 AM

May 15, 2013

Blue HackersAutism/Aspergers and Online Learning

Online learning environments present an opportunity to better serve autistic students that teachers can learn to use effectively.
May 15, 2013 12:20 PM

May 13, 2013

Daniel DevineNew Site New Server

DDEVnet.net is now running on a new server, in Australia. I also took the opportunity to replace my personal website, HTTP server and change my mail setup. Rather than coming out scarred, I've come through enlightened and more enthusiastic than ever.

Read more…

May 13, 2013 02:57 AM

May 08, 2013

Elspeth ThorneMemory, and the lack thereof.


I used to be able to recall events that happened with incredible clarity, as if they were happening right now.

Today? I can't remember what year I graduated my engineering degree, or what year my uncle died. I can work it out - but I don't remember the events. I can barely remember the first time I met my husband, our first kiss, or our wedding. What I do remember are like photographs - single frames - instead of being able to remember the sounds, the touch, the tastes, the smells, the feelings and what I saw as a seamless whole.

And it's more than just not being able to remember long-term memories, things that happened years ago. It's being unable to remember what happened last week, last month, needing to visit a place a few times before remembering where it is and what it looks like.

What I do remember is having near-perfect recall of places, and how to get there; or being able to remember what I did last week; or being able to remember people's names and faces; or being able to remember what year important things happened, and in what sequence.

I remember being able to do these things as recently as ... well, after visiting LinkedIn, I know it was in 2009. I couldn't remember the year, but I could remember where I lived and worked.  Kind of. So I could work out when it was. That's fairly typical for me, these days.

I have improved a little over the past year; I no longer have quite so many issues with recent events. According to some friends, early in 2011 it was truly frightening to see how quickly I forgot things; how much I just didn't remember day-to-day. Now, if I've met someone, I'll at least remember I *have* met them, even if I don't have a clue of their name or anything else about them, although I do sometimes recall names. I usually remember if I've made appointments, and often I'll get the day and time right, although I still occasionally double-book myself. This is nowhere near my previous levels, but ... it's better than it was, when I had to set myself alarms three times a day to remember to take my pills, and write notes to myself so I'd know I'd eaten.

Even at my current level of improvement, it is still incredibly distressing to have lost so much of my life. So much of the things I know I used to know, I no longer do. So many skills. It reduces me to tears if I think about it too much.

I have no idea why this has happened to me, or how to fix it. I have had more pressing and immediate problems to deal with. Those are, however, mostly either under control or uncontrollable, so I can start dealing with this particular issue.

I'm going to start by sending this entry to my GP, and beginning a document that chronicles my major life events, so I can at least put things in the right order in my mind. I can only hope that some of my memories are recoverable, and not gone forever.

May 08, 2013 10:44 AM

Ben MartinSave Ferris: Show some love for libferris...

libferris from KDE, also the ability to get at libferris from plasma.

I've been meaning to update the mounting of some Web services like vimeo for quite some time. I'd also like to expand to allow mounting google+ as a filesystem and add other new Web services.

In order to manage time so that this can happen quicker, I thought I'd try the waters with a pledgie. I've left this open ended rather than sticking an exact "bounty" on things. I had the idea of trying a pledgie with my recent investigation into the libferris indexing plugins on a small form factor ARM machine. I'd like to be able to spend more time on libferris, and also pay the rent while doing that, so I thought I'd throw the idea out into the public.

If you've enjoyed the old tricks of mounting XML, Berkeley DB, SQLite, PostgreSQL and other relational databases, flickr, google docs, identica, and others and want to see more then please support the pledgie to speed up continued development. Enjoy libferris!

Click here to lend your support to: Save Ferris: Show some love for libferris and help kick it

May 08, 2013 08:57 AM

May 03, 2013

Ben MartinIndexing on limited hardware... what to do


For testing purposes I built a fairly tiny index of only 130k files. An interesting test case is looking for specific files whose paths match a regular expression, returning a fairly small set of results. In this case, about 115 files result from using a four-character substring search as the regex. These are common queries when looking for files where you don't recall the exact ordering of the directory names or where a directory was. Small number of results, regex to pick them.

The memory mapped index implementation (boostmmap) uses boost IPC and multi indexed collections created in memory mapped files to maintain the index. The index also includes a digram index over each URL, allowing regular expressions to resolve through the index rather than needing evaluation against full URLs.

The SQLite index is fairly vanilla and doesn't include many customizations for sqlite. Whereas the PostgreSQL index implementation does use many of the features specific to that database. Neither the SQLite or boostmmap indexes in the public libferris repo attempt to do any compression on URL strings or the like.

A fairly basic index on 130k files is about 80mb using either memory mapped files or SQLite. Caches are cleared by echo 3 > drop_caches. Using an odroid-u2 with emmc flash, on a cold cache the SQLite index comes out about 10% faster than the boostmmap for a query finding 115 files. Turning off the regex prefilter index in the boostmmap makes it 10% slower again. This is a trade-off: a very fast CPU and a disk with good file locality and single extents will show little or no difference with the prefilter, as reading 80mb from disk will take less time and the CPU can run 130k regexes very quickly. The prefilter required only 124 regex evaluations; without the prefilter all 130611 URLs needed a regex evaluation.
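To illustrate the idea (not libferris's actual data structures), a digram prefilter can be sketched as: index every two-character substring of each URL, take a literal fragment of the query, intersect the digram posting sets, and only run the real regex over the survivors. All names below are hypothetical:

```python
# Minimal sketch of a digram prefilter for regex queries over URLs.
# Illustrative only -- libferris's actual index structures differ.
import re
from collections import defaultdict

def build_digram_index(urls):
    index = defaultdict(set)
    for i, url in enumerate(urls):
        for j in range(len(url) - 1):
            index[url[j:j+2]].add(i)
    return index

def query(urls, index, literal, pattern):
    # Candidate set: URLs containing every digram of the literal fragment.
    digrams = [literal[j:j+2] for j in range(len(literal) - 1)]
    if digrams:
        candidates = set.intersection(*(index[d] for d in digrams))
    else:
        candidates = set(range(len(urls)))
    # The full regex only runs over the (usually tiny) candidate set.
    rx = re.compile(pattern)
    return sorted(urls[i] for i in candidates if rx.search(urls[i]))

urls = ["file:///home/ben/docs/report.pdf",
        "file:///home/ben/photos/cat.jpg",
        "file:///srv/backup/report-2013.pdf"]
idx = build_digram_index(urls)
print(query(urls, idx, "pdf", r".*pdf.*"))
# Both .pdf URLs match, and only 2 regex evaluations were needed, not 3.
```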

The interesting part is with a warm cache the boostmmap is about twice as fast overall as the SQLite index. This is a big difference as the timing is for overall complete run time from the command line, and there is some overhead in starting up the index query itself. As usual, things vary depending on if you are expecting frequent queries (warm cache), have a very fast CPU (regex eval is relatively less costly), or need multiple updaters (SQLite allows it, my memory mapped doesn't).

To then experiment a little further, I brought the ferris clucene plugin into the mix. I disabled the explicit prefilter index on regex code for initial testing; the index became about 70mb and could resolve the query on a cold cache in about 65% of the time of the SQLite plugin. On warm cache the clucene was slowest, which is mainly due to the prefilter being disabled and the fallback code making the URL query a WildcardQuery with no pre or postfix to anchor the query on.

Next time around I'll see how speed effective the prefilter index is on clucene. I know it slows down adding documents (you are indexing more), and is larger (I haven't optimized for index size), but it will be interesting to see the performance on the eMMC device for the prefilter.



May 03, 2013 06:37 AM

May 02, 2013

Ben MartinFilesystem Indexing: Taking the reins


I'm currently racing the boost memory mapped index with the SQLite backed index on simple URL queries against the filesystem. This is being done on about 2ghz ARM machines with either 512 or 2048mb of RAM. The boostmmap plugin is of my own design and contains some smarts while executing regular expression matching against unanchored strings (.*foo.*). Unfortunately the boostmmap plugin is not as smart as it could be regarding scattered updates, transactions, and journaling, which slows it down a bit in the index creation phase relative to the SQLite plugin.

The below is a skeleton bash script to get started adding files. Another option is to ssh into the remote host and run find(1) there which can be much faster over network filesystems. The whitelist environment variable is to override which metadata libferris will index. If your index indicates it wants sha and md5 digests, the act of calculating those can dominate indexing time. An explicit whitelist keeps index adding times down with the obvious side effect of limiting what you can use in your queries. Such a limited list of metadata as in the below brings the index closer to what locate provides.

#!/bin/bash

rm -rf /tmp/fidxtmp
mkdir -p /tmp/fidxtmp
cd /tmp/fidxtmp

find /DATA-PATH | split -l 5000
 

export LIBFERRIS_EAINDEX_EXPLICIT_WHITELIST="name,size,mtime"

for f in x*
do
   echo "adding $f..."
   cat "$f" | feaindexadd -v -1 >>/tmp/ferris-index-progress.txt
done


Then you can find all your PDF files for example using the following:

feaindexquery -Z '(url=~pdf)'

The -Z tells libferris not to try to lstat() or resolve URLs to see if they exist currently. Much faster results but at the cost of not weeding out things which might have moved since they were last indexed.

And all the files which have been written this year

feaindexquery -Z '(mtime>=begin this year)'

Unfortunately the quotes are needed, as bash wants to do things with naked parentheses.

Save Ferris! Or just donate to an open source project or organization of your choice if you like the ferris posts.

May 02, 2013 10:31 AM

April 30, 2013

Ben Martinlibferris available for debian arm hard float


With the eMMC card on the u2, it is a really enjoyable little server machine to play around with. So far I've done the rudimentary test that XML is mountable as a filesystem and created one or two indexes with the Qt/SQLite index plugin for libferris. Note that in recent releases the sqlite backend is transaction backed, which gives a huge performance increase, and on really IO-constrained machines this is even more noticeable. This is a little tip for those using QtSQL: transactions are not just for making operations atomic; you may find that the whole show runs faster when it is transaction protected.
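The same tip applies outside QtSQL. As a stand-in illustration using Python's sqlite3 module (not libferris's actual code): in autocommit mode every INSERT is its own transaction and forces its own journal sync, while wrapping the batch in one explicit transaction does a single sync at commit, which is where the big win on slow storage comes from.

```python
# Stand-in demonstration of the transaction tip using Python's sqlite3
# (libferris uses QtSQL, but the effect is the same): batching a run of
# INSERTs inside one transaction means one journal sync at COMMIT instead
# of one per statement.
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
con.execute("CREATE TABLE urls (id INTEGER PRIMARY KEY, url TEXT)")

con.execute("BEGIN")                       # one explicit transaction...
for i in range(1000):
    con.execute("INSERT INTO urls (url) VALUES (?)", (f"file:///f{i}",))
con.execute("COMMIT")                      # ...one sync, not 1000

print(con.execute("SELECT COUNT(*) FROM urls").fetchone()[0])  # 1000
```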

If you haven't played with libferris, things are auto mounted where possible, and there are many coreutils like tools to make interacting with ferris simple. The ferris-redirect is like the bash redirection but can write to any filesystem that libferris knows how to mount.

ben@odroidu2:/tmp/test$ cat example.xml
<top>
<node name="foo">bar
</node>
</top>
ben@odroidu2:/tmp/test$ fls example.xml
top
ben@odroidu2:/tmp/test$ fls example.xml/top
foo
ben@odroidu2:/tmp/test$ fcat example.xml/top/foo
bar
ben@odroidu2:/tmp/test$ date | ferris-redirect -T example.xml/top/foo
ben@odroidu2:/tmp/test$ cat example.xml
<?xml version="1.0" encoding="UTF-8" standalone="no" ?>
<top>
  <node name="foo">Tue Apr 30 14:57:26 PDT 2013
</node>
</top>

The above interaction would also work for mounted Berkeley DB and other filesystems of course.


I have noticed one of the binary scoped destructors doesn't like the hard float build for whatever reason. This can cause some of the command line tools to not exit gracefully, which is a shame. I can't get a good backtrace for the situation either, which makes tracking it down a nice day long adventure into trial and error.

So something productive has been generated during the last round of jet lag after all!

The goods http://fuuko.libferris.com/debian/debian-armhf/


Save Ferris!

April 30, 2013 10:08 PM

Anthony TownsMessing with taxes

It’s been too long since I did an economics blog post…

Way back when, I wrote fairly approvingly of the recommendations to simplify the income tax system. The idea being to more or less keep charging everyone the same tax rate, but to simplify the formulae from five different tax rates, a medicare levy, and a gradually disappearing low-income tax offset, to just a couple of different rates (one kicking in at $25k pa, one at $180k pa). The government’s adopted that in a half-hearted way — raising the tax free threshold to $18,200 instead of $25,000, and reducing but not eliminating the low-income tax offset. There’s still the medicare levy with its weird phase-in procedure, and there’s still four different tax rates. And there’s still all sorts of other deductions etc to keep people busy around tax time.

Anyway, yesterday I finally found out that the ATO publishes some statistics on how many people there are at various taxable income levels — table 9 of the 2009-2010 stats are the best I found, anyway. With that information, you can actually mess around with the tax rules and see what effect it actually has on government revenue. Or at least what it would have if there’d been no growth since 2010.

Anyway, by my calculations, the 2011-2012 tax rates would have resulted in about $120.7B of revenue for the government, which roughly matches what they actually reported receiving in that table ($120.3B). I think putting the $400M difference (or about $50 per taxpayer) down to deductions for dependants and similar that I’ve ignored seems reasonable enough. So going from there, if you followed the Henry review’s recommendations, dropping the Medicare levy (but not the Medicare surcharge) and low income tax offset, the government would end up with $117.41B instead, so about $3.3B less. The actual changes between 2011-2012 and 2012-2013 (reducing the LITO and upping the tax free threshold) result in $118.26B, which would have only been $2.4B less. Given there’s apparently a $12B fudge-factor between prediction and reality anyway, it seems a shame they didn’t follow the full recommendations and actually make things simpler.

Anyway, upping the Medicare levy by 0.5% seems to be the latest idea. By my count doing that and keeping the 2012-2013 rates otherwise the same would result in $120.90B, ie back to the same revenue as the 2011-2012 rates (though biased a little more to folks on taxable incomes of $30k plus, I think).

Personally, I think that’s a bit nuts — the medicare levy should just be incorporated into the overall tax rates and otherwise dropped, IMO, not tweaked around. And it’s not actually very hard to come up with a variation on the Henry review’s rates that both simplify tax levels, don’t increase tax on any individual by very much, and do increase revenue. My proposal would be: drop the medicare levy and low income tax offset entirely (though not the medicare levy surcharge or the deductions for dependants etc), and set the rates as: 35% for earnings above $25k, 40% for earnings above $80k, and 46.5% for earnings above $180k. That would provide government revenue of $120.92B, which is close enough to the same as the NDIS levy. It would cap the additional tax any individual pays to $2000 compared to 2012-13 rates, ie it wouldn’t increase the top marginal rate. It would decrease the tax burden on people with taxable incomes below $33,000 pa — the biggest winners would be people earning $25,000 who’d go from paying $1200 tax per year to nothing. The “middle class” earners between $35,000 and $80,000 would pay an extra $400-$500; higher income earners between $80,000 and $180,000 get hit between $500 and $2000, and anyone above $180,000 pays an extra $2,000. Everyone earning over about $34,000 but under about $400,000 would pay more tax than if the NDIS were just increased, everyone earning between $18,000 and $34,000 would be better off.
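The proposed schedule is simple enough to check directly. A small sketch (the bracket thresholds and rates are from the paragraph above; everything else, including the function name, is illustrative):

```python
# Proposed schedule from the post: medicare levy and LITO dropped, with
# marginal rates of 35% above $25k, 40% above $80k and 46.5% above $180k.
BRACKETS = [(25_000, 0.35), (80_000, 0.40), (180_000, 0.465)]

def proposed_tax(income):
    tax = 0.0
    for i, (threshold, rate) in enumerate(BRACKETS):
        upper = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if income > threshold:
            tax += (min(income, upper) - threshold) * rate
    return round(tax, 2)   # round to whole cents

print(proposed_tax(25_000))    # 0.0 -- "from paying $1200 tax per year to nothing"
print(proposed_tax(80_000))    # 19250.0
print(proposed_tax(200_000))   # 68550.0
```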

On a dollar basis, the comparison looks like:

Translating that to a percentage of income, it looks like:

Not pleasant, I’m sure, for the dual-$80k families in western Sydney who are just scraping by and all, but I don’t think it’s crazy unreasonable.

But the real win comes when you look at the marginal rates:

Rather than seven levels of marginal rate, there’s just three; and none of them are regressive — ie, you stop having cases like someone earning $30,000 paying 21% on their additional dollar of income versus someone earning $22,000 paying 29% on their extra dollar. At least philosophically and theoretically that’s a good thing. In practice, I’m not sure how much of a difference it makes:

There are spikes at both the $80,000 and $35,000 points, which involve 8% and 15% increases in the nominal tax rates respectively; I think that’s mostly due to people transferring passive income to family members who either don’t work or have lower paid jobs: if you earn a $90,000 salary it’s better to assign the $20,000 rental income from your units to your kid at uni and pay 15% or 30% tax on it than to assign it to yourself and pay 38%, especially if you then just deposit it back into your family trust fund either way. The more interesting spikes are those around the $20,000 and $30,000 points, but I don’t really understand why those are shaped the way they are, and couldn’t really be sure that they’d smooth out given a simpler set of marginal rates.

Anyway, I kind-of thought it was interesting that it’s not actually very hard to come up with a dramatically simpler set of tax rates, that’s both not overly punitive and gives about the same additional revenue as the mooted levy increase.

(As a postscript, I found it particularly interesting to see just how hard it is to get meaningful revenue increases by tweaking the top tax rate; there’s only about $33B being taxed at that rate, so you have to bump the rate by 3% to get a bump of $1B in overall revenue, which is either pretty punitive or pretty generous before you’re making any useful difference at all. I chose to leave it matching the current income level and rate; longer term, I think making the levels something like $25k, $100k, $200k, and getting the percentages to rounder figures (35%, 40%, 45%?) would probably be sensible. If I really had my druthers, I’d rather see the rate be 35% from $0-$100,000, and have the government distribute, say, $350 per fortnight to everyone — if you earn over $26k, that’s just the equivalent of a tax free threshold of $26k; if you don’t, it’s a helpful welfare payment, whether you’re a student, disabled, temporarily unemployed, retired, or something else like just wanting to take some time off to do charity work or build a business)

April 30, 2013 04:13 PM

April 21, 2013

Elspeth ThorneRandom photographs - March-April 2013.












April 21, 2013 11:07 PM

April 19, 2013

Sarah SmithThe Next Electric Cars

The 1915 Detroit Electric

During that time, sadly, virtually no progress was made on automotive propulsion batteries or electric vehicle motive power.  The Electric Vehicle arrived on the scene around the same time as the Internal Combustion powered car, and while the EV was supreme in the cities, through accidents of history and industrial conspiracy (which you can read about in Edwin Black's "Internal Combustion") the EV passed into the annals of history.

Electric Citicar - Southward Auto Museum, New Zealand

Technologists and even motor companies dabbled in electric vehicles during that 100 years, but saw them as toys and quirky golf-cart size "personal transports", like this "Electric Citicar" made between 1972 & 1978 by Sebring-Vanguard.  Really these sorts of vehicles were new bodies on the old technologies, no better than the lead-acid batteries of the Detroit Electric 60 years previous.  They were light-weight experiments instead of real full-blown motor cars.  So-called electric "neighbourhood cars" produced today are still evidence of this lack of progress.

Until now.

Photo of the Porsche 918 from GizMag
The 2013 Porsche 918R Hybrid 770bhp 3s 0-60 (gizmag.com)
Now with crises in air quality in our big cities, the effects of global warming and pollution in general on our planet, and the need for energy security we are seeing a resurgence in electric vehicle technology, and electric vehicles on the production lines of many of the world auto makers.

We are also seeing high-performance vehicles powered by electric motors on our race tracks.  Vehicles like the incredible Porsche 918 Hybrid.  While it does have a petrol power-plant on board as well, the 918 can run on pure battery power, and with advances in batteries (see below) Porsche's technology will no doubt be seen in future all-electric vehicles.  The point is that the electric power train is already seen by high-performance motor engineers as the way forward.

I drive a Nissan Leaf: a practical all-electric passenger sedan with a range of 175km and performance equal to or better than similar-sized fossil-fuel powered cars.  The Leaf has no exhaust pipe and produces zero emissions, so as I drive past schools and sidewalk cafes I can be sure I'm not ruining the air quality for my fellow city dwellers.  I don't have to make a regular pilgrimage to buy fuel at gas stations, and any time I go down to the car in my garage it's there ready to go, charged up for free courtesy of the solar panels on my roof.  It's a clean machine.

But electric vehicle development is not stopping there, and it's interesting to wonder how far behind we are after our 100 year sleep, and how quickly we can catch up.  Look at the speed of the cellphone revolution for examples of the speed of innovation we can expect.

There are a couple of things to be said about this catch-up process.  The first is that while electric cars are already good, they are only going to get better.

Motor-sports aficionados already recognise the applicability of the new technologies in EV and hybrid power, so we are seeing a new age where the petrol-head becomes the electron-head, as speed and handling come from improvements in electronic and electric drive technology.

One example is the active steering and suspension on the Porsche which has many new freedoms for designers of high-performance cars due to the flexibility electric drive gives you.  The old CV joint and half-shaft, or conventional differential and driveshaft setup of petrol powered vehicles meant many restrictions on what can be achieved.  Porsche engineers are breaking out of these paradigms with the 918 and setting new performance benchmarks as they do so.

exploded image of in wheel motor and tyre
Protean in wheel motor (cleantechnica.com)

In-wheel motors promise huge advances in this area, effectively giving 3 degrees of freedom for suspension geometry design and active (computer controlled electronic) suspension and drive systems.  Protean claim much improved regenerative braking energy recovery too.  While in-wheel motors have a challenge with unsprung weight, that is outweighed many times over by the prospects for fully active suspension, steering and drive that come from an in-wheel motor.

graphic interpretation of the micro battery architecture with tiny interleaved cells
University of Illinois interdigitated battery architecture (BBC)

This new battery technology, unveiled just days before I write this article, comes from the University of Illinois and promises power outputs 10s or 100s of times better than current battery technology.

Update: (thanks Russell!) it was pointed out that the U of Illinois battery is more about power output than watts per size (energy density, or capacity).  Other projects are promising big improvements in EV range from higher battery capacity, such as the 3 x lithium-air chemistry and 10 x aluminium-air projects, like the one from Phinergy (although the latter one is currently not rechargeable).

With that sort of energy density my Nissan Leaf would have over 1000km of range on a battery half the size of the one it currently has.  Countries that want to improve air quality in their cities while keeping up with the bleeding edge of technology in transportation are investing heavily in the R&D of battery technologies.

Fossil-fuel powered cars will still be around for a long time, but their golden age has passed now.  No matter how good you make internal combustion engines, they rely on foreign oil and produce pollution.  Yes, there are ethanol and other bio-fuels now, but in practice you have to buy E10 or other mixes with petrol.  In theory diesels and flex-fuels can use 100% bio-fuels, but it's not available now and production in quantity appears to be neither economically viable nor environmentally sustainable.

Advances in internal combustion powered vehicles show fewer and fewer returns for R&D dollars spent, compared to electric.  There's no flux capacitor or magic catalyst that is going to improve petrol 10s or 100s of times over.  ICE cars are about as good as they can ever be and no promised fuel technologies are coming to change that.

I'm driving my 100% emissions free sun-powered electric vehicle today.  Right now.  No theories, no corn subsidies, all real.

Now the point of all this is that the way ahead for Electric Vehicles is expanding and full of promise.  Every new advance brings large improvements in practical areas like range and carrying capacity.  When EVs are powered by clean sources like my solar panels the entire personal transport experience is of low impact to our environment.

How far can the Electric Vehicle go?  I'm excited to find out.

April 19, 2013 03:50 AM

March 28, 2013

Ben MartinFontForge: Rolling Type Design with No Save Using Collab


So, No Save is only for the "doing" scripts. I of course want to save the current type at my leisure :)

You might be wondering what a script that creates a glyph might look like. I had to do a bit of trial and error to figure out how to use the scripting API myself. With that in mind, I might roll some of my scripts into the mainline FontForge git repo so others can enjoy the little snippets to base their own scripts on.

Anyway, the following script will load a font and create a new capital "C" glyph. The part of this that wasn't intuitive to me is that you have to set g.layers[] to the layer you got earlier from g.layers[] or the new contour will not show up in the saved sfd file. See the MUST comment for the needed line.

import fontforge

f=fontforge.open("test.sfd")
fontforge.logWarning( "font name: " + f.fullname )

g = f.createChar(-1,'C')        # create a new glyph named 'C'
l = g.layers[g.activeLayer]     # note: this gives you a *copy* of the layer
c = fontforge.contour()
c.moveTo(100,100)
c.lineTo(100,700)
c.lineTo(800,700)
c.lineTo(800,600)
c.lineTo(200,600)
c.lineTo(200,200)
c.lineTo(800,200)
c.lineTo(800,100)
c.lineTo(100,100)
l += c                          # append the contour to the copied layer
g.layers[g.activeLayer] = l #### MUST do this for changes to show up

f.save("/tmp/new.sfd")

At first blush I was expecting to do something like
c = layer.createContour()
c.moveTo()
...
c.lineTo()
and then not have to do anything special. At that stage calling f.save() should know about the new contour etc and save. But without setting the g.layers[active] to the layer that contains the contour you will not see it.

Digging into the C code in python.c I see that point, contour etc are all basically abstractions for python use only. When you assign to a layer in the "g.layers[] = l" call, the C function PyFF_LayerArrayIndexAssign() calls PyFF_Glyph_set_a_layer() which uses something like SSFromLayer() to convert the python only data structure (contour or what have you) into a native "c" SplineSet object.

The good news is that with all this mining into python.c I now have some collab sprinkles in there. So when you do "g.layers[] = l" the FontForge in script mode will send updates to the layer off to the server as a collab update message.

The test is quite easy to run as three consecutive scripts: start the collab server process (the collab server remains running, the script ends); next attach to the collab server and update the C glyph; and finally attach to the collab server, grab all its data, and save an out.sfd file.

fontforge -script collab-sessionstart.py
fontforge -script collab-sessionjoin-and-change-c.py
fontforge -script collab-sessionjoin-and-save-to-out.sfd.py

The middle script connects to the collab server and makes its changes with the python API and then exits. No Save. To know if the changes made it to the collab server, the last script grabs all the updates etc and builds the "current" font to save into /tmp/out.sfd.

It took a bit of hacking in the python code, but now the little changes to the contour (path) of the C glyph are sent to the server as one would expect.

The python scripts are still out of repo. Since they are interesting in and of themselves I'll likely put them into my fontforge fork as a prestage to having them mainline.

Now to move on to the next thing that needs to be sent to the collab server and updated in all clients.

March 28, 2013 11:02 AM

March 12, 2013

Clinton RoyReview: Three Crooked Kings

I must confess to reading this book rather quickly over two nights, so I’m sure some nuances were missed.

Like a lot of Australians, and presumably Queenslanders, I do have a bit of an unhealthy interest in our criminal past. I think I was in grade six when some of the Fitzgerald inquiry was finishing up (or recommendations being handed down, or some such) and I remember getting interested then. Personally, I make a lot of use of our Joh-given right to mix metaphors.

Overall I was disappointed by the book. There’s clearly a lot of research gone into it, which is great, and the narrative ties it together reasonably well. There are editing issues and some ham-fisted attempts at pop psychology. A glossary for all the colloquialisms would have been useful. The worst thing for me though, is that the main interviewee, Lewis, comes out with nary a red cross against his name. Time and time again Lewis is implicated in the skulduggery of the time, but he denies the worst of it at every turn.

I feel I can’t quite call the book a whitewashing of Lewis’s history yet, as there’s still another half to go; Lewis may well let it all out then and redeem himself. But I also feel the author has let us down by not digging deeper on Lewis. Maybe that was part of the interview deal; or maybe Lewis still has powerful friends.

On a matter that I really should disclose, it appears that family members are named in the book, in none too good a light.


Filed under: books

March 12, 2013 08:49 AM

March 11, 2013

Andrae MuysConfiguring Sesame 2.6.10 data-directory without using System properties


In the openrdf-http-server-servlet.xml file the adunaAppConfig bean (class: info.aduna.app.AppConfiguration) can accept a dataDirName property whose value is the data directory.

I haven't found this documented anywhere, so I hope this helps someone. Probably that person will be me next time I need to configure Sesame.

The fragment in question:

    <bean id="adunaAppConfig"
          class="info.aduna.app.AppConfiguration"
          init-method="init"
          destroy-method="destroy">
        <property name="applicationId" value="OpenRDF Sesame"/>
        <property name="longName" value="OpenRDF Sesame"/>
        <property name="version" ref="adunaAppVersion"/>
        <property name="dataDirName" value="/var/lib/aduna"/>
    </bean>

March 11, 2013 01:51 AM

Last updated: January 29, 2014 12:01 PM. Contact Brad with problems.