

HUMBUGers


April 17, 2014

Ben Fowler: Bad HDMI cables and CEC

Fig: nasty HDMI cable from Maplin that apparently doesn't support CEC

I got myself a Raspberry Pi (a tiny, £30 hacker-friendly computer), and then set it up with RaspXBMC to stream my movie rips off the house file server to my Samsung 6-Series TV. Now, you're supposed to be able to use the TV remote to drive XBMC, but it didn't work initially. No matter how hard I tried, I simply could not

April 17, 2014 09:26 PM

April 14, 2014

Adrian Sutton: Testing@LMAX – Testing in Live

Previously in the Testing@LMAX series I’ve mentioned the way we’ve provided isolation between tests, allowing us to run them in parallel. That isolation extends all the way up to supporting a multi-tenancy model called venues, which allows us to essentially run multiple, functionally separate exchanges on a single deployment of the LMAX Exchange.

We use the isolation of venues to reduce the amount of hardware we need to run our three separate liquidity pools (LMAX Professional, LMAX Institutional and LMAX Interbank), but that’s not all. We actually use the isolation venues provide to extend our testing all the way into production.

We have a subset of our acceptance tests which, using venues, are run against the exchange as it is deployed in production, using the same APIs our clients, MTF members and internal staff would use, to test that the exchange is fully functional. We have an additional venue on each deployment of the exchange that is used to run these tests. The tests connect to the exchange via the same gateways as our clients (FIX, web, etc) and place real trades that match using the exact same systems and code paths as in the “real” venues. Code-wise there’s nothing special about the test venue; it just so happens that the only external party that ever connects to it is our testing framework.

We don’t run our full suite of acceptance tests against the live exchange due to the time that would take and to ensure that we don’t affect the performance or latency of the exchange. Plus, we already know the code works correctly because it’s already run through continuous integration. Testing in live is focussed on verifying that the various components of the exchange are hooked up correctly and that the deployment process worked correctly. As such we’ve selected a subset of our tests that exercise the key functions of each of the services that make up the exchange. This includes things like testing that an MTF member can connect and provide prices, that clients can connect via either FIX or web and place orders that match against those prices and that the activity in the exchange is reported out correctly via trade reporting and market data feeds.

We run testing in live as an automated step at the start of our release, prior to making any changes, and again at the end of the release to ensure the release worked properly. If testing in live fails we roll back the release. We also run it automatically throughout the day as one part of our monitoring system, and it is also run manually whenever manual work is done or whenever there is any concern for how the exchange is functioning.

While we have quite a lot of other monitoring systems, the ability to run active monitoring like this against the production exchange, going as far as actions that change state, gives us a significant boost in confidence that everything is working as it should, and helps isolate problems more quickly when things aren’t.

April 14, 2014 12:36 PM

April 12, 2014

Adrian Sutton: Testing@LMAX – Test Isolation

One of the most common reasons people avoid writing end-to-end acceptance tests is how difficult it is to make them run fast. Chief among the causes is the time required to start up the entire service and shut it down again. At LMAX, with the full exchange consisting of a large number of different services, multiple databases and other components, start-up is far too slow to be done for each test, so our acceptance tests are designed to run against the same server instance and not interfere with each other.

Most functions in the exchange can be isolated by simply creating one or more accounts and instruments that are unique to the particular test. The test simply starts by using the usual administration APIs to create the users and instruments it needs. Just this basic level of isolation allows us to test a huge amount of the exchange’s functionality – all of the matching behaviour for example. With the tests completely isolated from each other we can run them in parallel against the same server and dramatically reduce the amount of hardware required to run all the acceptance tests in a reasonable time.
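A rough sketch of the pattern, with invented names rather than LMAX's actual DSL (purely illustrative):

import uuid

def test_order_matches_at_provided_price(admin, trading):
    # Illustrative sketch only: these DSL names are made up, not LMAX's API.
    # Everything the test touches is created fresh, so tests following this
    # pattern can run in parallel against one server without colliding.
    suffix = uuid.uuid4().hex[:8]
    buyer = admin.create_account("buyer-" + suffix)
    seller = admin.create_account("seller-" + suffix)
    instrument = admin.create_instrument("instrument-" + suffix)
    trading.place_order(seller, instrument, side="sell", price=10, quantity=5)
    fill = trading.place_order(buyer, instrument, side="buy", price=10, quantity=5)
    assert fill.matched_quantity == 5

Because every account and instrument is unique to the test, any number of tests built this way can safely share one running exchange.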

But there are many functions in an exchange that cut across instruments and accounts – for example, the exchange rates used to convert the profit or loss a user earns in one currency back to their account’s base currency. Initially, tests that needed to control exchange rates could only be run sequentially, each one taking over the entire exchange while it ran and significantly increasing the time required for the test run. More recently, however, we’ve made the concept of currency completely generic – tests now simply create unique currencies they can use and set the exchange rates between those currencies without affecting any other tests. This not only makes our acceptance tests run significantly faster, it also means new currencies can be supported in the exchange without any code changes – just use the administration UI to create the desired currency.

We’ve applied the same approach of creating a completely generic solution even when there is a known set of values in a range of other areas, giving us better test isolation and often making it easier to respond to unexpected future requirements. Sometimes this adds complexity to the code or administration options which could have been avoided but the increased testability is well worth it.

The ultimate level of test isolation, however, is our support for multiple venues running in a single instance of the exchange. This essentially moves the exchange to a multi-tenancy model: a venue encapsulates all aspects of an exchange, allowing us to test back office reports that track how money moves around the exchange, reconciliation reports that cover all trades, and many other functions that report on the state of the exchange as a whole.

With the LMAX Exchange now essentially running in three forms (Professional, Institutional and Interbank) this support for venues is more than just an optimisation for tests – we can co-host different instances of the exchange on the same production hardware, reducing not only the upfront investment required but also the ongoing maintenance costs.

Overall we’ve seen that making features more easily testable (using end-to-end acceptance tests) surprisingly often delivers business benefit, making the investment well worth it.

April 12, 2014 12:36 PM

April 06, 2014

Blue Hackers: Shift workers beware: Sleep loss may cause brain damage, new research says

April 06, 2014 08:48 AM

April 01, 2014

Blue Hackers: Students and Mental Health at University

The Guardian is collecting experiences from students regarding mental health at university. I must have missed this item earlier as there are only a few days left now to get your contribution in. Please take a look and put in your thoughts! It’s always excellent to see mental health discussed. It helps us and society as a whole.

April 01, 2014 11:34 PM

Adrian Sutton: Testing@LMAX – Time Travel and the TARDIS

Testing time related functions is always a challenge – generally it involves adding some form of abstraction over the system clock which can then be stubbed, mocked or otherwise controlled by unit tests in order to test the functionality. At LMAX we like the confidence that end-to-end acceptance tests give us but, like most financial systems, a significant amount of our functionality is highly time dependent so we need the same kind of control over time but in a way that works even when the system is running as a whole (which means it’s running in multiple different JVMs or possibly even on different servers).

We’ve achieved that by building on the same abstracted clock as is used in unit tests but exposing it in a system-wide, distributed way. To stay as close as possible to real-world conditions we have some reduced control in acceptance tests; in particular, time always progresses forward – there’s no pause button. However, we do have the ability to travel forward in time so that we can quickly test scenarios that span multiple days, weeks or even months. When running acceptance tests, the system clock uses a “time travel” implementation. Initially this clock simply returns the current system time, but it also listens for special time messages on the system’s messaging bus. When one of these time messages is received, the clock calculates the difference between the time specified in the message and the current system clock time, and records that difference. From then on, when it’s asked for the time, the clock adds that difference to the current system time. As a result, when a time message is received, time immediately jumps forward to the requested point and then continues advancing at the same rate as the system clock.
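A minimal sketch of that clock, assuming millisecond units and invented names (not LMAX's actual code):

import time

class TimeTravelClock:
    # Real time plus an offset that jumps forward whenever a time message
    # arrives on the message bus. max() preserves the "time only moves
    # forward" rule described above.
    def __init__(self):
        self.offset_millis = 0

    def on_time_message(self, target_millis):
        self.offset_millis = max(self.offset_millis,
                                 target_millis - self._system_millis())

    def current_time_millis(self):
        # After a jump, time keeps advancing at the normal system rate.
        return self._system_millis() + self.offset_millis

    def _system_millis(self):
        return int(time.time() * 1000)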

Like all good schedulers, ours are written in a way that ensures events fire in the correct order even if time suddenly jumps forward past the point at which an event should have triggered. So receiving a time message not only jumps time forward, it also triggers all the events that should have fired during the skipped period, allowing us to test that they did their job correctly.
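Sketched the same way (illustrative only), such a scheduler is just a priority queue drained in order on every clock advance, so a sudden jump simply releases every event that became due during the skip:

import heapq
import itertools

class JumpSafeScheduler:
    def __init__(self, clock):
        self.clock = clock
        self.pending = []             # heap of (due_millis, seq, action)
        self.seq = itertools.count()  # tie-breaker so actions never compare

    def schedule(self, due_millis, action):
        heapq.heappush(self.pending, (due_millis, next(self.seq), action))

    def tick(self):
        # Called on every clock advance, including time-travel jumps: fire
        # every event that became due while we skipped forward, in order.
        now = self.clock.current_time_millis()
        while self.pending and self.pending[0][0] <= now:
            _, _, action = heapq.heappop(self.pending)
            action()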

The time messages are published by a time travel service which is only run in our acceptance test environment – it exposes a JMX method which our acceptance tests use to set the current system time. Each service that uses time also exposes its current time and the time its schedulers have reached via JMX, so when a test time travels we can wait until the message has been received by each service and all the scheduled events have finished running.

The TARDIS

The trouble with controlling time like this is that it affects the entire system so we can’t run multiple tests at the same time or they would interfere with each other. Having to run tests sequentially significantly increases the feedback cycle. To solve this we added the TARDIS to the DSL that runs our acceptance tests. The TARDIS provides a central point of control for multiple test cases running in parallel, coordinating time travel so that the tests all move forward together, without the actual test code needing to care about any of the details or synchronisation.

The TARDIS hooks into the DSL at two points – when a test asks to time travel and when a test finishes (by either passing or failing). When a test asks to time travel, the TARDIS tracks the destination times being requested and blocks the test until all tests are either ready to time travel or have completed. It then time travels to the earliest requested time and wakes up any tests that requested that time point so they can continue running. Tests that requested a time point further in the future remain paused, waiting for the next time travel.
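The coordination is essentially a barrier keyed by destination time. A guess at the shape of it (hypothetical code, not LMAX's): tests block on a condition variable, and the earliest request wins each round.

import threading

class Tardis:
    def __init__(self, test_ids, travel_fn):
        self.cond = threading.Condition()
        self.running = set(test_ids)
        self.requests = {}            # test id -> requested destination time
        self.travel_fn = travel_fn    # performs the system-wide time travel
        self.now = 0

    def time_travel(self, test_id, destination):
        with self.cond:
            self.requests[test_id] = destination
            while self.now < destination:
                self._travel_if_all_waiting()
                if self.now < destination:
                    self.cond.wait()

    def test_finished(self, test_id):
        with self.cond:
            self.running.discard(test_id)
            self.requests.pop(test_id, None)
            self.cond.notify_all()    # the remaining tests may all be waiting now

    def _travel_if_all_waiting(self):
        # Every running test has asked for a destination: jump to the earliest
        # one and release only the tests whose time has come.
        if self.requests and len(self.requests) == len(self.running):
            self.now = min(self.requests.values())
            self.travel_fn(self.now)
            for tid in [t for t, d in self.requests.items() if d <= self.now]:
                del self.requests[tid]
            self.cond.notify_all()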

Since we had a lot of time travel tests already written before we invented the TARDIS this approach allowed us to start running them in parallel without having to rewrite them – the TARDIS is simply integrated into the DSL framework we use for all tests.

Currently the TARDIS only works for tests running in the same JVM, so essentially it allows test cases to run in parallel with other cases from the same test suite, but it doesn’t allow multiple test suites on separate Romero agents to run in parallel. The next step in its evolution will be to move the TARDIS out of the test’s DSL and provide it as an API from the time travel service on the server. At that point we can run multiple test suites in parallel against the same server. However, we haven’t yet done the research to determine what, if any, benefit we’d get from that change as different test suites may have very different time travel patterns and thus spend most of their time at interim time points waiting for other tests. Also the load on servers during time travel is quite high due to the number of scheduled jobs that can fire so running multiple test suites at once may not be viable.

* Time being in sync is actually a more complex concept than it first appears. The overall architecture of our system meant this approach to time actually did provide very accurate time sources relative to our main “source of all truth”, the exchange venue itself, which is what we really cared about. Even so, anything that had to be strictly “in-sync” generated a timestamp in the service that triggered it and then included that timestamp in the outgoing event, which is the only sane way to do such things.

April 01, 2014 04:44 AM

March 30, 2014

Anthony Towns: Bitcoincerns

Bitcoincerns — as in Bitcoin concerns! Get it? Hahaha.

Despite having an interest in ecash, I haven’t invested in any bitcoins. I haven’t thought about it in any depth, but my intuition says I don’t really trust it. I’m not really sure why, so I thought I’d write about it to see if I could come up with some answers.

The first thing about bitcoin that bothered me when I first heard about it was the concept of burning CPU cycles for cash — ie, set up a bitcoin miner, get bitcoins, …, profit. The idea of making money by running calculations that don’t provide any benefit to anyone is actually kind of offensive IMO. That’s one of the reasons I didn’t like Hashcash back in the day. I think that’s not actually correct, though, and that the calculations being run by miners are actually useful in that they ensure the validity of bitcoin transfers.

I’m not particularly bothered by the deflationary expectations people have of bitcoin. The “wild success” cases I’ve seen for bitcoin estimate their value by hand-wavy arguments where you take a crazy big number, divide it by the 20M max bitcoins that are available, and end up with a crazy big number per bitcoin. Here’s the argument I’d make: someday many transactions will take place purely online using bitcoin, let’s say 75% of all transactions in the world by value. Gross World Product (GDP globally) is $40T, so 75% of that is $30T per year. With bitcoin, each coin can participate in a transaction every ten minutes, so that’s up to about 52,000 transactions a year per coin, and there are up to 20M bitcoins. So if each bitcoin is active 100% of the time, you’d end up with a GWP of 1.04T bitcoins per year, and an exchange rate of $28 per bitcoin, growing with world GDP. If, despite accounting for 75% of all transactions, each bitcoin is only active once an hour, multiply that figure by six for $168 per bitcoin.

That assumes bitcoins are used entirely as a medium of exchange, rather than hoarded as a store of value. If bitcoins got so expensive that they can only just represent a single Vietnamese Dong, then 21,107 “satoshi” would be worth $1 USD, and a single bitcoin would be worth $4737 USD. You’d then only need 739k bitcoins each participating in a transaction once an hour to take care of 75% of the world’s transactions, with the remaining 19M bitcoins acting as a value store worth about $91B. In the grand scheme of things, that’s not really very much money. I think if you made bitcoins much more expensive than that you’d start cutting into the proportion of the world’s transactions that you can actually account for, which would start forcing you to use other cryptocurrencies for microtransactions, for example.
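The arithmetic in the last two paragraphs is easy to check; a quick script reproducing the numbers (give or take rounding against the post's figures):

# 75% of a $40T gross world product flowing through bitcoin.
gwp_via_bitcoin = 0.75 * 40e12
max_coins = 20e6

# Scenario 1: every coin transacts every ten minutes.
tx_per_coin = 365 * 24 * 6                          # ~52,560 a year
print(gwp_via_bitcoin / (max_coins * tx_per_coin))  # ~$28.5 per bitcoin

# Scenario 2: one satoshi = one Vietnamese Dong (~21,107 VND per USD).
btc_price = 1e8 / 21107                             # ~$4738 per bitcoin
active = gwp_via_bitcoin / (btc_price * 365 * 24)   # coins needed at 1 tx/hour
print(active)                                       # ~723k active coins
print((max_coins - active) * btc_price / 1e9)       # ~$91B left as a value store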

Ultimately, I think you’d start hitting practical limitations trying to put 75% of the world’s transactions through a single ledger (ie hitting bandwidth, storage and processing constraints), and for bitcoin, that would mean having alternate ledgers which is equivalent to alternate currencies. That would involve some tradeoffs — for bitcoin-like cryptocurrencies you’d have to account for how volatile alternative currencies are, and how amenable the blockchains are to compromise, but, provided there are trusted online exchanges to convert one cryptocurrency into another, that’s probably about it. Alternate cryptocurrencies place additional constraints on the maximum value of bitcoin itself, by reducing the maximum amount of GWP happening in bitcoin versus other currencies.

It’s not clear to me how much value bitcoin has as a value store. Compared to precious metals, it is much easier to transport, much easier to access, and much less expensive to store and secure. On the other hand, it’s much easier to destroy or steal. It’s currently also very volatile. As a store of value, the only things that would make it better or worse than an alternative cryptocurrency are (a) how volatile it is, (b) how easy it is to exchange for other goods (liquidity), and (c) how secure the blockchain/algorithms/etc are. Of those, volatility seems like the biggest sticking point. I don’t think it’s unrealistic to imagine wanting to store, say, $1T in cryptocurrency (rather than gold bullion, say), but with only 20M bitcoins, that would mean each bitcoin was worth at least $50,000. Given a current price of about $500, that’s a long way away — and since there are a lot of things that could happen in the meantime, I think high volatility at present is a pretty plausible outcome.

I’m not sure if it’s possible or not, but I have to wonder whether a bitcoin-based cryptocurrency designed to be resistant to volatility would be implementable. I’m thinking (a) a funded exchange guaranteeing a minimum exchange rate for the currency, and (b) a maximum number of coins and a coin generation rate for miners that makes that exchange plausible. The exchange for, let’s call it “bitbullion”, should self-fund to some extent by selling new bitbullion at a price of 10% above guidance, and buying at a price of 10% below guidance (and adjusting guidance up or down slightly any time it buys or sells, purely in order to stay solvent).

I don’t know what the crypto underlying the bitcoin blockchain actually is. I’m surprised it’s held up long enough to get to where bitcoin already is, frankly. There’s nominally $6B worth of bitcoins out there, so it would seem like you could make a reasonable profit if you could hack the algorithm. If there were hundreds of billions or trillions of dollars worth of value stored in cryptocurrency, that would be an even greater risk: being able to steal $1B would tempt a lot of people, being able to destroy $100B, especially if you could pick your target, would tempt a bunch more.

So in any event, the economic/deflation concerns seem assailable to me. The volatility not so much, but I’m not looking to replace my bank at the moment, so that doesn’t bother me either.

I’m very skeptical about the origins of bitcoin. The fact it’s the first successful cryptocurrency, and also the first definitively non-anonymous one is pretty intriguing in my book. Previous cryptocurrencies like Chaum’s ecash focussed on allowing Alice to pay Bob $1 without there being a record of anything other than Alice is $1 poorer, and Bob is $1 richer. Bitcoin does exactly the opposite, providing nothing more than a globally verifiable record of who paid whom how much at what time. That seems like a dream come true for law enforcement — you don’t even have to get a warrant to review the transactions for an account, because everyone’s accounts are already completely public. Of course, you still have to find some way to associate a bitcoin wallet id with an actual person, but I suspect that’s a challenge with any possible cryptocurrency. I’m not quite sure what the status of the digicash/ecash patents are/were, but they were due to expire sometime around now (give or take a few years), I think.

The second thing that strikes me as odd about bitcoin is how easily it’s avoided being regulated to death. I had expected the SEC to decide that bitcoins are a commodity with no real difference to a share certificate, and that as a consequence they can only be traded using regulated exchanges by financial professionals, or similar. Even if bitcoins still count as new enough to have only gotten a knee-jerk regulatory response rather than a considered one (which, at $500 a pop and with significant mainstream media coverage, I doubt), I would have expected something more along the lines of “bitcoin trading is likely to come under regulation XYZ, operating or using an unregulated exchange is likely to be a crime, contact a lawyer” rather than “we’re looking into it”. That makes it seem like bitcoin has influential friends who aren’t being very vocal in public, and conspiracy theories involving NSA and CIA/FBI folks suggesting leaving bitcoin alone for now might help fight crime seem more plausible than ones involving Gates or Soros or someone secretly creating a new financial world order.

The other aspect is that it seems like there are only really four plausible creators of bitcoin: one or more super smart academic types, a private startup of some sort, an intelligence agency, or a criminal outfit. It seems unlikely to me that a criminal outfit would create a cryptocurrency with a strong audit trail, but I guess you never know. It seems massively unlikely that a legitimate private company would still be secret, rather than cashing out. Likewise it seems unlikely that people who’d just done it because it seemed like an interesting idea would manage to remain anonymous still; though that said, cryptogeeks are weird like that.

If it was created by an intelligence agency, then its life to date makes some sense: advertise it as anonymous online cash that’s great for illegal stuff like buying drugs and can’t be tracked, sucker a bunch of criminals into using it, then catch them, confiscate the money, and follow the audit trail to catch more folks. If that’s only worked for the Silk Road folks, that’s probably pretty small-time. If bitcoin was successfully marketed as “anonymous, secure cryptocurrency” to organised crime or terrorists, and that gave you another angle to attack some of those networks, you could be on to something. It doesn’t seem like it would be difficult either to break into MtGox and other trading sites to gain an initial mapping between bitcoins and real identities, or to analyse the blockchain comprehensively enough to see through most attempts at bitcoin laundering.

Not that I actually have a problem with any of that. And honestly, if secret government agencies lean on other secret government agencies in order to create an effective and efficient online currency to fight crime, that’s probably a win-win as far as I’m concerned. One concern I guess I do have, though, if you assume a bunch of law-enforcement cryptonerds built bitcoin, is that they might also have a way of “turning it off” — perhaps a real compromise in the crypto that means they can easily create forks of the blockchain and make bitcoins useless, or just enough processor power that they can break it by brute force, or even just some partial results in how to break bitcoin that would destroy confidence in it, and destroy the value of any bitcoins. It’d be fairly risky to know of such a flaw and trust that it wouldn’t be uncovered by the public crypto research community, though.

All that said, if you ignore the criminal and megalomaniacal ideas for bitcoin, and assume the crypto’s sound, it’s pretty interesting. At the moment, a satoshi is worth 5/10,000ths of a cent, which would be awesome for microtransactions if the transaction fee wasn’t at 5c. Hmm, looks like dogecoin probably has the right settings for microtransactions to work. Maybe I should have another go at the pay-per-byte wireless capping I was thinking of that one time… Apart from microtransactions, some of the conditional/multiparty transaction possibilities are probably pretty interesting too.

March 30, 2014 01:00 PM

Adrian Sutton: Testing@LMAX – Distributed Builds with Romero

LMAX has invested quite a lot of time into building a suite of automated tests to verify the behaviour of our exchange. While the majority of those tests are unit or integration tests that run extremely fast, in order for us to have confidence that the whole package fits together in the right way we have a lot of end-to-end acceptance tests as well.

These tests deliver a huge amount of confidence and are thus highly valuable to us, but they come at a significant cost because end-to-end tests are relatively time consuming to run. To minimise the feedback cycle we want to run these tests in parallel as much as possible.

We started out by simply creating separate groupings of tests, each of which would run in a different Jenkins job and thus run in parallel. However, as the set of tests changed over time we kept having to rebalance the groups to ensure we got fast feedback. With jobs finishing at different times they would generally also pick different revisions to run against, so we often had no single revision that all the tests had run against, reducing confidence in the specific build we picked to release to production each iteration.

To solve this we’ve created custom software to run our acceptance tests, which we call Romero. Romero has three parts: a central server that loads the tests and allocates them, the agents that actually run them, and the results reporting that feeds back to the team.

At the start of a test run, the revision to test is deployed to all servers, then Romero loads all the tests for that revision and begins allocating one test to each agent and assigning that agent a server to run against. When an agent finishes running a test it reports the results back to the server and is allocated another test to run. Romero also records information about how long a test takes to run and then uses that to ensure that the longest running tests are allocated first, to prevent them “overhanging” at the end of the run while all the other agents sit around idle.
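Romero allocates dynamically – one test per agent, with the next test handed out as each agent finishes – but longest-first is the classic greedy policy. A static sketch of the idea (illustrative only, not Romero's internals):

import heapq

def allocate_longest_first(tests, agents, runtime):
    # Sort tests by historical runtime, longest first, and always hand the
    # next test to the agent with the least work queued, so no long test is
    # left to "overhang" at the end of the run. runtime: test -> seconds.
    plan = {agent: [] for agent in agents}
    busy = [(0.0, agent) for agent in agents]   # (seconds queued, agent name)
    heapq.heapify(busy)
    for test in sorted(tests, key=lambda t: runtime[t], reverse=True):
        queued, agent = heapq.heappop(busy)
        plan[agent].append(test)
        heapq.heappush(busy, (queued + runtime[test], agent))
    return plan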

To make things run even faster, most of our acceptance test suites can be run in parallel, running multiple test cases at once and also sharing a single server between multiple Romero agents. Some tests, however, exercise functions which affect the global state of the server and so can’t share a server. Romero is able to identify these types of tests and use that information when allocating agents to servers. Servers are designated as either parallel, supporting multiple agents, or sequential, supporting only a single agent. At the start of the run Romero calculates the optimal way to allocate servers between the two groups, again using historical information about how long each test takes.

Altogether this gives us an acceptance test environment which is self-balancing – if we add a lot of parallel tests in one iteration, servers are automatically moved from sequential to parallel to minimise the run time.

Romero also has one further trick up its sleeve to reduce feedback time – it reports failures as they happen instead of waiting until the end of the run. Often a problematic commit can be reverted before the end of the run, which is a huge reduction in feedback time – normally the next run would already have started with the bad commit still in before anyone noticed the problem, effectively doubling the time required to fix.

The final advantage of Romero is that it seamlessly handles agents dying, even in the middle of running a test, reallocating that test to another agent – giving us better resiliency and keeping the feedback cycle going even in the case of minor problems in CI. Unfortunately we haven’t yet extended this resiliency to the test servers, but it’s something we would like to do.

March 30, 2014 02:48 AM

March 25, 2014

Adrian Sutton: Testing@LMAX – Test Results Database

One of the things we tend to take for granted a bit at LMAX is that we store the results of our acceptance test runs in a database to make them easy to analyse later.  We not only store whether each test passed or not for each revision, but the failure message if it failed, when it ran, how long it took, what server it ran on and a bunch of other information.

Having this info somewhere that’s easy to query lets us perform some fairly advanced analysis on our test data. For example we can find tests that fail when run after 5pm New York (an important cutoff time in the world of finance) or around midnight (in various timezones). It has also allowed us to identify subtly faulty hardware based on the increased rate of test failures.

In our case we have custom software that distributes our acceptance tests across the available hardware, so it records the results directly to the database; however, we have also parsed JUnit XML reports and imported them into the database that way.
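The JUnit route is only a few lines; a sketch (the table layout here is invented for illustration, not LMAX's actual schema):

import sqlite3
import xml.etree.ElementTree as ET

def import_junit_report(db, xml_path, revision):
    # Record one row per test case: pass/fail, failure message and duration.
    db.execute("""CREATE TABLE IF NOT EXISTS test_results
                  (revision TEXT, name TEXT, passed INTEGER,
                   failure TEXT, duration_secs REAL)""")
    for case in ET.parse(xml_path).getroot().iter("testcase"):
        failure = case.find("failure")
        db.execute("INSERT INTO test_results VALUES (?, ?, ?, ?, ?)",
                   (revision,
                    case.get("classname", "") + "." + case.get("name", ""),
                    1 if failure is None else 0,
                    None if failure is None else failure.get("message"),
                    float(case.get("time", 0))))
    db.commit()

import_junit_report(sqlite3.connect("results.db"), "TEST-report.xml", "r12345")

Once the rows are in, questions like "which tests fail after 5pm New York" become a single query.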

However you get the data, having a historical record of test results in a form that’s easy to query is a surprisingly powerful tool and worth the relatively small investment to set up.

March 25, 2014 10:54 AM

Blue Hackers: Rude vs. Mean vs. Bullying: Defining the Differences

March 25, 2014 01:16 AM

March 23, 2014

Adrian Sutton: Revert First, Ask Questions Later

The key to making continuous integration work well is ensuring that the build stays green – so the team always knows that if something doesn’t work, it was their change that broke it. However, in any environment with a build pipeline beyond a simple commit build – for example integration, acceptance or deployment tests – sometimes things will break.

When that happens, there is always a natural tendency to commit an additional change that will fix it. That’s almost always a mistake.

The correct approach is to revert first and ask questions later. It doesn’t matter how small the fix might be: there’s a risk that it will fail or introduce some other problem and extend the period that the tests are broken. Since we know the last build worked correctly, however, reverting the change is guaranteed to return things to working order. The developers working on the problematic change can then take their time to develop and test the fix, then recommit everything together.

Reverting a commit isn’t a slight against its developers and doesn’t even suggest the changes being made are bad, merely that some detail hasn’t yet been completed and so it’s not yet ready to ship. Having a culture where that’s understood and accepted is an important step in delivering quality software.

March 23, 2014 09:51 AM

March 22, 2014

Anthony Towns: BeanBag — Easy access to REST APIs in Python

I’ve been doing a bit of playing around with REST APIs lately, both at work and for my own amusement. One of the things that was frustrating me a bit was that actually accessing the APIs was pretty baroque — you’d have to construct urls manually with string operations, manually encode any URL parameters or POST data, then pass that to a requests call with params to specify auth and SSL validation options and possibly cookies, and then parse whatever response you get to work out if there’s an error and to get at any data. Not a great look, especially compared to XML-RPC support in python, which is what REST APIs are meant to obsolete. Compare, eg:

import xmlrpclib

server = xmlrpclib.Server("http://foo/XML-RPC")
print server.some.function(1,2,3,{"foo": "bar"})

with:

import requests

base_url = "https://api.github.com"
resp = requests.get(base_url + "/repos/django/django")
if resp.ok:
    res = resp.json()
else:
    raise Exception(resp.json())

That’s not to say the python way is bad or anything — it’s certainly easier than trying to do it in shell, or with urllib2 or whatever. But I like using python because it makes the difference between pseudocode and real code small, and in this case, the xmlrpc approach is much closer to the pseudocode I’d write than the requests code.

So I had a look around to see if there were any nice libraries to make REST API access easy from the client side. Ended up getting kind of distracted by reading through various arguments that the sorts of things generally called REST APIs aren’t actually “REST” at all according to the original definition of the term, which was to describe the architecture of the web as a whole. One article that gives a reasonably brief overview is this take on REST maturity levels. Otherwise doing a search for the ridiculous acronym “HATEOAS” probably works. I did some stream-of-consciousness posts on Google-Plus as well, see here, here and here.

The end result was I wrote something myself, which I called beanbag. I even got to do it mostly on work time and release it under the GPL. I think it’s pretty cool:

import beanbag

github = beanbag.BeanBag("https://api.github.com")
x = github.repos.django.django()
print x["name"]

As per the README in the source, you can throw in a session object to do various sorts of authentication, including Kerberos and OAuth 1.0a. I’ve tried it with github, twitter, and xero’s public APIs with decent success. It also seems to work with Magento and some of Red Hat’s internal tools without any hassle.
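For instance, something along these lines should work for basic auth (a sketch from my reading of the README; the session keyword is my recollection of BeanBag's API, so double-check it against the source):

import beanbag
import requests

# A requests Session carries the credentials; BeanBag just uses it for
# every request it makes.
sess = requests.Session()
sess.auth = ("username", "password")
github = beanbag.BeanBag("https://api.github.com", session=sess)
x = github.user()            # GET https://api.github.com/user
print(x["login"])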

March 22, 2014 08:47 AM

March 21, 2014

Adrian Sutton: Javassist & Java 8 – invalid constant type: 15

Here’s a fun one: we have some code generated using Javassist that looks a bit like:

public void doStuff(byte actionId) {
    switch (actionId) {
        case 0: doOneThing(); break;
        case 1: doSomethingElse(); break;
        default:
            throw new IllegalArgumentException("Unknown action: " + actionId);
    }
}

This works perfectly fine on Java 6 and Java 7 but fails on Java 8. It turns out the problematic part is "Unknown action: " + actionId. When run on Java 8 that throws "java.io.IOException: invalid constant type: 15" in javassist.bytecode.ConstPool.readOne.

The workaround is simple: do the conversion from byte to String explicitly:

throw new IllegalArgumentException("Unknown action: " +
        String.valueOf(actionId));

There may be a more recent version of Javassist which fixes this; I haven’t looked, as the workaround is so trivial (and we only hit this problem in error handling code which should never be triggered anyway).

March 21, 2014 04:32 AM

March 20, 2014

Blue Hackers: On inclusiveness – diversity

Ashe Dryden writes about Dissent Unheard Of – noting “Perhaps the scariest part of speaking out is seeing the subtle insinuation of consequence and veiled threats by those you speak against.” From my reading of what goes on, much of it is not even very subtle, or veiled. Death and rape threats. Just astonishingly foul. This is not how human beings should treat each other, ever. I have the greatest respect for Ashe, and her courage in speaking out and not being deterred. Rock on Ashe!

The reason I write about it here on BlueHackers.org is that I think there is a fair overlap between issues of harassment and other nasties and depression, and it will affect individuals, companies, conferences and other organisations alike. So it’s important to call it out and actually deal with it, not regard it as someone else’s problem.

Our social and work place environments need to be inclusive for all, it’s as simple as that. Inclusiveness is key to achieving diversity, and diverse environments are the most creative and innovative. If a group is not diverse, you’re missing out in many ways, personally as well as commercially. Please read Ashe’s thoughtful analysis of what causes people and places to often not be inclusive – such understanding is a good step towards effecting change. Is there something you can do personally to improve matters in an organisation? Please tell about your thoughts, actions and experiences. Let’s talk about this!

March 20, 2014 05:37 AM

March 18, 2014

Adrian Sutton: The truth is out: money is just an IOU

The truth is out: money is just an IOU, and the banks are rolling in it:
What the Bank of England admitted this week is that none of this is really true. To quote from its own initial summary: “Rather than banks receiving deposits when households save and then lending them out, bank lending creates deposits” … “In normal times, the central bank does not fix the amount of money in circulation, nor is central bank money ‘multiplied up’ into more loans and deposits.” In other words, everything we know is not just wrong – it’s backwards. When banks make loans, they create money.
The complexity and subtlety in our financial system is delightful and frightening all at the same time. Almost everything has multiple contributing factors that are impossible to isolate, and subtle shifts in world view like this can have huge implications for decision making.

March 18, 2014 11:50 PM

March 02, 2014

Ben Martin: The vs1063 untethered

Late last year I started playing with the vs1063 DSP chip, making a little Audio Player. Quite a lot of tinkering is available for in/output on such a project, so it can be an interesting way to play around with various electronics stuff without it feeling like 10 das blinken lights tutorials in a row.

Over this time I tried using a 5 way joystick for input as well as some more unconventional mechanisms. The Blackberry Trackballer from SparkFun is my favourite primary input device for the player. Most of the right hand side board is to make using that trackballer simpler, boiling it down to a nice I2C device without the need for tight timing or 4 interrupt lines on the Arduino.




The battery in the middle area of the picture is the single 18650 protected cell that runs the whole show. The battery leads via a switch to a 3.7V -> 5V step-up to provide a clean power input. Midway down the right middle board is a low-dropout 3v3 regulator from which the bulk of the board is running. The SFE OLED character display wants its 5V, and the vs1063 breakout board, for whatever reason, wants to regulate its own 3v3 from a 5V input. Those are the two 5V holdouts on the board.

Last blog post I was still using a Uno as the primary microcontroller. I've since gotten rid of the Uno and moved to an on-board 328 running at 8MHz and 3v3. Another interesting learning experience was finding when something 'felt' like it needed a cap. At times the humble cap combo is a great way to get things going again after changing a little part of the board. After the last clean up it should all fit onto 3 boards again, so it might then also fit back into the red box. Feels a bit sad not to have broken the red box threshold anymore :|

Loosely the board comes out somewhat like the below... there are things missing from the fritzing layout, but it's a good general impression. The trackballer only needs power, gnd, I2C, and MR pin connections. With reasonable use of SMD components the whole thing should shrivel down to the size of the OLED PCB.

Without tuning the code to sleep the 328 or turn off the attiny84 + OLED screen (the OLED is only set all black), it uses about 65mA while playing. I'm running the attiny84 and OLED from an output pin on the 4050 level shifter, so I can drop power to them completely if I want. I can expect over 40hrs of continuous play without doing any power optimization. So it's all going to improve from there ;)

March 02, 2014 01:33 PM

February 18, 2014

Daniel Devine: Snorkelling at Perth

I was in Perth for linux.conf.au 2014 and for a place that I've heard sucks, I'm mostly seeing the opposite.


The view from near Mudurup Rocks.

Like Dirk Hohndel's talk, this post is not so much about technology as it is about the sea. I meant to write this post before I had even left Perth but I ended up getting sidetracked by LCA and general horseplay.

After Kate Chapman's enjoyable keynote I decided to blow that popsicle stand and snorkel at Cottesloe Reef (pdf), which is an FHPA (Fish Habitat Protection Area) just 30 minutes from the conference venue by bus.

Read more…

February 18, 2014 08:30 AM

February 10, 2014

Adrian Sutton: Interruptions, Coding and Being a Team

Esoteric Curio – 折伏 – as it relates to coding
Our job as a team is to reinforce each other and make each other more productive. While I attempt to conquer some of the engineering tasks as well as legal tasks and human resource tasks I face, I need concentration and it would certainly help to not be interrupted. I used to think I needed those times of required solitude pretty much all the time. It turns out that I have added far more productivity by enabling my teams than I have personally lost due to interruptions, even when it was inconvenient and frustrating. So much so that I've learned to cherish those interruptions in the hope that, on serendipitous occasion, they turn into a swift, spiritual kick to the head. After all, sometimes all you need for a fresh perspective is to turn out the light in the next room; and, of course, to not have avoided the circumstance that brought you to do it.
Too often we confuse personal productivity, or even just the impression of it, with team productivity. Often, though, having individuals slow down is the key to making the team as a whole speed up. See also Go Faster By Not Working and The Myth of Measurable Productivity.

February 10, 2014 03:21 AM

February 04, 2014

Ben Martin: attiny screen driver for parallel controlled character displays

This is the "details" post for controlling a 7 bit parallel OLED character display using an attiny as a display driver. See the previous post for an overview of the setup and video.

And now... the code. Apologies for some of the names of directories. Instead of branching and whatnot in git I just copied the code to library(n+1) and added the next feature in line as I went. I might at some stage do a write up detailing the stepping stones to get from the bare attiny84 to the code below.

To make it simpler, I'll call the "main" arduino the Uno. That is the one that wants to run the show. The screen is attached to the attiny84, which I'll just call the tiny84.

This is the code that one uses on the Uno: oled_clientspireal.ino. It is designed to look very close to the example it was based on (linked in the second line of its header comment).

The attiny84 runs attiny_oled_server_spi2.ino. The heavy lifting is done in SimpleMessagePassing5, which I link next. The main part of loop() is to process each full message that we get from SimpleMessagePassing5. There is timeout() logic in there after that which allows the tiny84 to somewhat turn off the screen after a period of no activity on the SPI bus. noDisplay() doesn't turn off the OLED internal power, so you are only down to about 6mA there.
SimpleMessagePassing5 does two main things (coupling SPI with message byte boundaries in a single file; the academics would be unhappy): SimpleMessagePassing5.cpp. Earlier versions of SMP did
USICR |= (1 << USIOIE); 
in init(). That line turns on the ISR for USI SPI overflow. But that is itself now done in a pin change ISR so that the USI is only active when the attiny84 has been chip selected.

serviceInput() is a fairly basic state machine, mopping up bytes until we have a whole "message" from the SPI master. If there is a complete message available then takeMessage() will return true. It is then up to the caller to do smp.buffer().get() to actually grab and disregard the bytes that comprise a message.

I'm guessing that at some stage I'd add a "Message" class that takeMessage could then return. The trick will be to do that without needing to copy from smp.buffer() so that sram is kept fairly low.
The Uno uses the shim class Shim_CharacterOLEDSPI2.cpp, which just passes the commands along to the attiny84 for execution.
This all seems to smell a bit like it wants to use something like Google protocol buffers for marshaling, but the hand-crafted code works :) In some ways using GPB for such a simple interface as CharacterOLEDSPI2 might well be considered over-engineering, but I'm still tempted.

February 04, 2014 12:22 PM

January 30, 2014

Blue Hackers: A novel look at how stories may change the brain

January 30, 2014 12:32 PM

January 29, 2014

Ben Martin: Ripple counting trackballer hall effects

Sparkfun sells a breakout with a blackberry trackballer on it and 4 little hall effect sensors. One complete ball rotation generates 11 hall events, so the up, down, left, and right pins will each pulse 11 times a rotation. The original thought was to hook those up to digital pins on the arduino and use an interrupt that covered that block of pins to count the hall events as they came in. Then in loop() one could know how many events had happened since the last check and work from there.

Feeling like doing it more in hardware though, I turned to the 74HC393, which has two 4-bit ripple counters in it. Since there were four lines I wanted to count I needed two 393 chips. The output (the count) from the 393 is offered on four lines per counter (it's a 4-bit counter). So I then took those outputs and fed them into the MCP23x17 pin muxer, which has 16 in/out pins on it. I used the I2C version of the MCP chip in this case. It then boils down to reading the chip when you like and pulsing the MR pin on the 393s to reset the counters.


In the example sketch I pushed to github, I have a small list of choices that you can scroll through; if you stop scrolling for "a while" then it selects the current entry – which just happens to be the exact use case I am planning to put this hardware to next. Apart from the feel-good factor, this design should have less chance of missing events if you already have interrupt handlers which themselves might take a while to execute.

January 29, 2014 12:50 PM

January 14, 2014

Adrian Sutton: Hypercritical: The Road to Geekdom

Geekdom is not a club; it’s a destination, open to anyone who wants to put in the time and effort to travel there…

…dive headfirst into the things that interest you. Soak up every experience. Lose yourself in the pursuit of knowledge. When you finally come up for air, you’ll find that the long road to geekdom no longer stretches out before you. No one can deny you entry. You’re already home.

via Hypercritical: The Road to Geekdom.

January 14, 2014 10:16 AM

Adrian Sutton: Myth busting mythbusted

As a follow-up to the earlier link regarding the performance of animations in CSS vs JavaScript, Christian Heilmann – Myth busting mythbusted:
Jack is doing a great job arguing his point that CSS animations are not always better than JavaScript animations. The issue is that all this does is debunking a blanket statement that was flawed from the very beginning and distilled down to a sound bite. An argument like “CSS animations are better than JavaScript animations for performance” is not a technical argument. It is damage control. You need to know a lot to make a JavaScript animation perform well, and you can do a lot of damage. If you use a CSS animation the only source of error is the browser. Thus, you prevent a lot of people writing even more badly optimised code for the web.
Some good points that provide balance and perspective to the way web standards evolve and how to approach web development.

January 14, 2014 04:19 AM

January 13, 2014

Adrian Sutton: Myth Busting: CSS Animations vs. JavaScript


Myth Busting: CSS Animations vs. JavaScript:

As someone who’s fascinated (bordering on obsessed, actually) with animation and performance, I eagerly jumped on the CSS bandwagon. I didn’t get far, though, before I started uncovering a bunch of major problems that nobody was talking about. I was shocked.

This article is meant to raise awareness about some of the more significant shortcomings of CSS-based animation so that you can avoid the headaches I encountered, and make a more informed decision about when to use JS and when to use CSS for animation.

Some really good detail on performance of animations in browsers. I hadn’t heard of GSAP previously but it looks like a good option for doing animations, especially if you need something beyond simple transitions.

January 13, 2014 09:19 PM

January 09, 2014

Adrian Sutton: Are iPads and tablets bad for young children?

The Guardian: Are iPads and tablets bad for young children?

Kaufman strongly believes it is wrong to presume the same evils of tablets as televisions. “When scientists and paediatrician advocacy groups have talked about the danger of screen time for kids, they are lumping together all types of screen use. But most of the research is on TV. It seems misguided to assume that iPad apps are going to have the same effect. It all depends what you are using it for.”

It all depends what you are using it for. I can’t think of a better answer to any question about whether a technology is good or bad. Kids spending time staring at an iPad watching a movie probably isn’t giving them much benefit apart from some down time to have a break, but sitting with your child playing games or reading stories on the iPad has many great benefits.

As a parent, I sometimes find this unsettling. But I try to be mindful that it is an open question whether it is unsettling because there is something wrong with it, or because it wasn’t a feature of my own childhood.

We’re often unaware of how strongly we are biased towards the way we were brought up. People who grew up in a family with two children generally want to have two children themselves. People who grew up on a farm think it’s important for their kids to get experience on a farm, etc. Even when you’re aware of that, it’s easy to forget it works the other way too – you may view certain activities as undesirable for your children purely because you didn’t do them in your childhood.

So what should a parent who fears their child’s proficiency on a tablet do? … “You need to acquire proficiency,” she says. “You can acquire it from them. They can teach you.”

This is probably the best advice in the entire article. Don’t be afraid of doing things with your child just because you aren’t familiar with them or confident in how to do them. Discovering new things together or having your child teach you something is one of the best ways for you both to learn and grow as people.

Finally, regarding the case of a four-year-old who was supposedly addicted to iPad use:

that “case”, so eagerly taken up by the tabloids, comprised a single informal phone call with a parent, in which <the doctor> gave advice. There was no followup treatment. He doesn’t believe that “addiction” is a suitable word to use of such young children.

So don’t believe everything you hear in the media…

January 09, 2014 10:39 AM

January 03, 2014

Blue Hackers: BlueHackers @ linux.conf.au 2014 Perth

BlueHackers.org is an informal initiative with a focus on the wellbeing of geeks who are dealing with depression, anxiety and related matters. This year we’re more organised than ever with a number of goodies, events and services!

- BlueHackers stickers
- BlueHackers BoF (Tuesday)
- BlueHackers funded psychologist on Thursday and Friday
- extra resources and friendly people to chat with at the conference

Details below…

This year, we’ll have a professional psychologist, Alyssa Garrett (a Perth local) funded by BlueHackers, LCA2014 and Linux Australia. Alyssa will be available Thursday and part of Friday; we’ll allocate her time in half-hour slots using a simple (paper-based) anonymous booking system. Ask a question, tell a story, take time out, find out what psychology is about… particularly if you’re wondering whether you could use some professional help, or you’ve been procrastinating taking that step, this is your chance. It won’t cost you a thing, and it’s absolutely confidential. We just offer this service because we know how important it is! There will be about 15 slots available. You can meet Alyssa on Tuesday afternoon already, at the BoF. Just to say hi! The booking sheet will be at the BoF and from Wednesday near the rego desk.

The BlueHackers BoF is on Tuesday afternoon, 5:40pm – 6:40pm (just before the speakers dinner). Check the BoF schedule closer to the time to see which room we’re in. The format will be similar to last year: short lightning talks of people who are happy to talk – either from their own experience, as a support, or professional. No therapy, just sharing some ideas and experience. You can prep slides if you want, but not required. Anything discussed during the BoF stays there – people have to feel comfortable.

We may have some additional paper resources available during the conference, and a friendly face for an informal chat for part of the week. Every conference bag will have a couple of BlueHackers stickers to put on your laptop and show your quiet support for the cause – letting others know they’re not alone is a great help.

If you have any logistical or other questions, just catch me (Arjen) at the conference or write to: l i f e (at) b l u e h a c k e r s (dot) o r g

January 03, 2014 08:08 AM

January 02, 2014

Blue Hackers: How emotions are mapped in the body

January 02, 2014 10:54 PM

December 23, 2013

Blue Hackers: Feet up the Wall

Gina Rose of Nourished Naturally writes:
I often spend 5-30 minutes a day with my feet up the wall. What’s going on in this pose? Your femur bones are dropping into your hip sockets, relaxing the muscles that help you walk and support your back. Blood is draining out of your tired feet and legs. Your nervous system is getting a signal to slow down. Stress release and recovery time. This position is great for sore legs, helps with digestion & circulation as well as thyroid support. If you suffer from insomnia try this before bed.
I’ve done this at times but never thought through why it might be beneficial. Worth a try! And as they say, it doesn’t hurt to try – but of course it could, and if it does hurt, obviously stop straight away.

December 23, 2013 11:58 PM

December 19, 2013

Daniel Devine: TLDR: Getting a SSL Certificate Fingerprint in Python

For a project I am working on I need to know the SHA1 hash (the fingerprint) of a DER encoded (binary) certificate file. I found it strange that nobody had offered up a "here's a hunk of code" for something so simple. Any novice programmer would probably eventually arrive at this solution on their own, but for the sake of those Googling, here it is.

Read more…
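The snippet itself sits behind the link, but the idea is small enough to sketch here (a guess at the shape using only the standard library, not necessarily the post's exact code):

import hashlib

def der_cert_fingerprint(path):
    # The SHA1 fingerprint of a DER-encoded certificate is just the SHA1
    # digest of the file's raw bytes.
    with open(path, "rb") as f:
        digest = hashlib.sha1(f.read()).hexdigest().upper()
    # Conventional colon-separated presentation, e.g. "AB:CD:...".
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))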

December 19, 2013 05:21 AM

December 17, 2013

Ben Martin: The kernel of an Arduino audio player

The vs10x3 DSP chips allow you to decode or encode various audio formats. This includes encoding and decoding mp3 and ogg vorbis, as well as decoding flac, and support for many other formats. So naturally I took the well trodden path and decided to use an arduino to build a "small" audio player. The breadboard version is shown below. The show is driven by the Uno on the right. The vs1063 is on a Sparkfun breakout board at the top of the image. The black thing you see to the right of the vs1063 is an audio plug. The bottom IC in the line on the right is an attiny84. But wait, you say, don't you already have the Uno to be the arduino in the room? But yes, I say, because the attiny84 is an SPI slave device which is purely a "display driver" for the 4 bit parallel OLED screen at the bottom. Without having to play tricks overriding the reset pin, the tiny84 has just enough pins for SPI (4), power (2), and the OLED (7), so it's perfect for this application.

The next chip up from the attiny84 is an MCP23017 which connects to the TWI bus and provides 16 digital pins to play with. There is a SPI version of the muxer, the 23S17. The muxer works well for buttons and chip selects which are not toggled frequently. It seems either my library for the MCP is slow or using TWI for CS can slow down SPI operations where selection/deselection is in a tight loop.

Above the MCP is a 1Mbit SPI RAM chip. I'm using that as a little cache and to store the playlist for ultra quick access. There is only so much you can do with the 2kb of sram on the arduino. Above the SPI ram is a 4050 and SFE bidirectional level shifter. The three buttons in bottom left allow you to control the player reasonably effectively. Though I'm waiting for my next red box day for more controller goodies to arrive.

I've pushed some of the code to control the OLED screen from the attiny up to github, for example at:
https://github.com/monkeyiq/attiny_oled_server_spi2
I'll probably do a proper write-up about those source files and the whole display driver subsystem at some stage soon.

December 17, 2013 12:47 PM

December 09, 2013

Ben Martin: Asymmetric Multiprocessing at the MCU level

I recently got a 16x2 character OLED display from sparkfun. Unfortunately that display had a parallel interface, so it wanted to mop up at least 7 digital pins on the controlling MCU. Being a software engineer rather than an electrical engineer, the problem looked to me like one that needed another small microcontroller to solve it. The attiny84 is a 14 pin IC which has USI SPI and, once you trim off 4 pins for SPI and 3 for power, gnd, and reset, leaves you 7 digital pins to do your bidding with. This just so happens to be the same 7 pins one needs to drive the OLED over its 4 bit parallel interface!

Video: Driving an OLED screen over SPI using an attiny84 as a display driver, from Ben Martin on Vimeo: http://player.vimeo.com/video/81094386

Shown in the video above is the attiny84 being used as a display driver by an Arduino Uno. Sorry about having the ceiling fan on during capture. On the Uno side, I have a C++ "shim" class that has the same interface as the class used to drive the OLED locally. The main difference is you have to tell the shim class which pin to use as chip select when it wants to talk to the attiny84. On the attiny, commands come in over SPI and the real library that can drive the OLED screen is used to issue the commands to the screen.

The fuss about that green LED is that when it goes out, I've cut the power to the attiny84 and the OLED. Running both of them with a reasonable amount of text on screen uses about 20mA; with the screen completely dark and the tiny asleep that drops to 6mA. That can go down to less than 2mA if I turn off the internal power in the OLED. Unfortunately I haven't worked out how to turn the power back on again other than resetting the power to the OLED completely. But being able to drop power completely means that the display is an optional extra, and there is no power drain if there is nothing that I want to see.

Another way to go with this is using something like an MCP23S17 chip as a pin muxer and directly controlling the screen from the Uno. The two downsides to that design are the need to modify the real OLED library to use the pin muxer over SPI, and that you don't gain the use of a dedicated MCU to drive the screen. An example of the latter is adding a command to scroll through 10 lines of text: the Uno could issue that command and then forget about the display completely while the attiny handles the details of scrolling and updating the OLED.

Some issues in doing this were working out how to tell the tiny to go into SPI slave mode, then getting non-garbage from the SPI bus when talking to the tiny, and then working out acceptable delays for key times. When you send a byte to the tiny, the ISR that accepts that byte will take "some time" to complete. Even if you are using a preallocated circular array to dispense with the new byte as quickly as possible, the increment and modulo operations take time. Time that can be noticeable when the tiny is clocked at 8MHz and the Uno at 16MHz and you ramp up the SPI clock speed without mercy.

As part of this I also made a really trivial SPI calculator for the attiny84. By trivial I mean store, add, and print are the only operations, and there is only a single register for the current value. But it does show that the code that interacts with the SPI bus on the client and server side gets what one would expect from a sane adding machine. I'll most likely be putting this code up on github once I clean it up a little bit.

December 09, 2013 03:50 AM

Paul GearonDDD on Mavericks

Despite everything else I'm trying to get done, I've been enjoying some of my time working on Gherkin. However, after merging in a new feature the other day, I caused a regression (dammit).

Until now, I've been using judiciously inserted calls to echo to trace what has been going on in Gherkin. That does work, but it can lead to bugs by interfering with return values from functions, needs cleanup, needs the code to be modified each time something new needs to be inspected, and can result in a lot of unnecessary output before the required data shows up. Basically, once you get past a certain point, you really need to adopt a more scalable approach to debugging.

Luckily, Bash code like Gherkin can be debugged with bashdb. I'm not really familiar with bashdb, so I figured I should use DDD to drive it. Like most GNU projects, DDD is something I usually try to install with Fink, and sure enough, it is available. However, installation failed with an error due to an ambiguous overloaded + operator. It turns out that this is due to an update in the C++ compiler in OSX Mavericks. The code fix is trivial, though the patch hasn't been integrated yet.

Downloading DDD directly and running configure got stuck on finding the X11 libraries. I could have specified them manually, but I wasn't sure which ones Fink likes to use, and the system has several available (some of them old). The correct one was /usr/X11R6/lib, but given the other dependencies of DDD I preferred to use Fink. However, until the Fink package gets updated it won't compile and install on its own. So I figured I should try to tweak Fink to apply the patch.

Increasing the verbosity level on Fink showed up a patch file that was already being applied from:
/sw/fink/dists/stable/main/finkinfo/devel/ddd.patch

It looks like Fink packages all rely on the basic upstream package, with a patch applied in order to fit in with Fink or OS X. So all it took was for me to update this file with the one-line patch that would make DDD compile. One caveat is that Fink uses package descriptions that include checksums for various files, including the patch file. My first attempt at using the new patch reported both the expected checksum and the one that was found, so that made it easy to update the .info file.

If you found yourself here while trying to get Fink to install DDD, then just use these 2 files to replace the ones in your system:

If you have any sense, you'll check that the patch file does what I say it does. :)

Note that when you update your Fink repository, these files will likely get replaced.

December 09, 2013 12:34 AM

November 20, 2013

Blue HackersGo Home on Time Day | 20th Nov 2013

November 20, 2013 12:46 AM

November 16, 2013

Ben MartinSparkfun vs1063 DSP breakout

The Sparkfun vs1063 breakout gives you a vs1063 chip with a little bit of supporting circuit. You have to bring your own microcontroller, sdcard or data source, and level shifting.


One thing which, to my limited knowledge, seems unfortunate is that the VCC on the breakout has to be 5v. There are voltage regulators on the vs1063 breakout which give it 3.3v, 2.8v and 1.8v. Since all the vregs are connected to VCC and the board wants to make its own 3v3, I think you have to give 5v as VCC on the breakout board.

With the microsd needing to run on 3.3v, I downshifted the outbound SPI ports, the sdcard chip select, and the few input pins to the vs1063 board. Those are the two little red boards on the breadboard. The sdcard is simply on a breakout which does no level shifting itself. The MISO pin is good to go without shifting because a 3.3v signal will still trip as high on a 5v line. Likewise the interrupt pin DREQ, which goes to pin 2 on the Uno, doesn't have any upshifting.

I had a few issues getting the XDCS, XCS, and DREQ to all play well from the microcontroller. A quick and nasty hack was to attach that green LED in the middle of the photo to the interrupt line so I could see when it was tripped. During playback it gives a PWM effect as 32-byte blocks of data are shuffled to the vs1063 as it keeps playing. The DREQ is fired when the vs1063 can take at least another 32 bytes of data from the SPI bus into its internal buffer. Handy to have the arduino doing other things and also servicing that interrupt as needed.

I'm hoping to use a head-to-tail 3v3 design for making a mobile version of the circuit. I would have loved to use 2xAA batteries, but might have to go another way for power. Unfortunately the OLED screen I have is 5v, but there are 3v3 versions of those floating around, so I can get a nice modern low power display on there.

The next step is likely to prototype UI control for this, using the 5v OLED in the meantime to get a feel for how things will work. I get the feeling that an attiny might sit between the main arduino and the parallel OLED screen so it can be addressed on the SPI bus too. Hmm, an attiny going into major power save mode until chip selected back into life.

November 16, 2013 03:08 AM

November 13, 2013

Tony BilbroughDay 8 Journalism – An intensive class wrap up

The fun is all had, the story wrote,

There’s smiles and tears we cannot quote,

Now new friends depart this week.

It’s all about the thing we learned,

with so much more to seek.

 

Not quite Haiku, or nearly Welsh, but it is definitely mine

 Ian Skidmore’s final Blog, a grand farewell to us all

http://skidmoresisland.blogspot.com.au/

 Just one of the eulogies by his peers, in Gentlemen Ranters

http://www.gentlemenranters.com/

 So the course is over, three years jammed into a few very, very fast and furious days.

And with this end, decisions must be made… whether to continue learning more about journalism, and to a lesser extent, whether I have improved my writing skills enough to continue the blog.

Then if a blog is to continue, what could the time scale be between ‘writes’, from a practical perspective?

 Blogging daily, as done during the course, has really severe drawbacks socially.

With a rather limited journalistic ability, one needs many hours to assemble thoughts of events, or activity, to get them into some sort of linear order.

The alternatives are either: lose those three or four hours of social activity time, or as in the case of this exercise, lose sleep time!

Four hours sleep each night on a continuous basis just would not work for any extended period. But short bursts like this for a week or two might work ok, I guess.

 Much more importantly though – what did I get out of this course, in terms of writing ability?

Well unequivocally, a lot.

But for the sum of what I learned to work effectively, I would prolly need to become a lot more active in the writing side of at least some of the social groups I am active with.

This would be a clear fork in life’s path, and my guess is that this is another of those ‘dither points’. Ones that will oscillate continuously until someone or something gives direction more than just a nudge.

Looking back, this final day in class came on with a bigger rush than any adrenalin junkie could describe.

We participants had used so many different formats to gather data, and tell a story for the final Journal.

A mix of photographs, research through the Internet, phone calls recorded (with permission, of course), Vox Pop interviews, and recollection interviews, all with professional results.

I was so lucky that my story was edited first, while minds were fresh, and everyone looked relaxed and beautiful.

But as the day wore on, we were absolutely stunned by the amount of editing work needed on our stories. Think for a moment – a story like this, that takes you only a few minutes to read, took our editor an hour or more to sort out to an acceptable stage for the layout process to begin.

I was asked by one of the other students to include this next bit of info in my story, as an example of what editors have to go thru when passing a story up the line for publication.

Superfluous commas – thirty nine; syntax errors – nine; extra spaces – 28; missing full stops – three; two repetitive statements; an out of place piece in a paragraph. There was even a typo, in spite of the spell checker!

Other writers also had the order of the story a little crossed up, or parts of the tale needing more clarification.

Interesting to me was that all the stories had similar editing problems, and every single one of us believed that we had submitted a perfect job to the editor. Sigh

Summarising our course of ‘Three years of Journalism – in eight days’, and to show our educators that we did really listen most of the time:

Always use ‘Who did What’ with attributed statements.

Always think Who, Where, What, When, Why and How, when you write.

Abide by the law; it’s there, so accept it.

Act ethically, be fair and meet audience expectations.

Build, and be a part of your community.

Be Credible, be able to back up your statements,

And finally, check your Grammar, and check your Grammar, and check……

 

Thank you Ursula and Bec and … for your time and patience with us all.

You know we all had a great time, finding we could do so much of what we had believed was impossible to achieve on day one, and reaching an understandable end for day eight.

You are a great teacher, and you did it well, Missus.

 Gentlemen Ranters tend to say, “if it isn’t totally accurate, at least it is accurate enough…”


November 13, 2013 01:25 PM

November 12, 2013

Tony BilbroughDay 7 Bringing on the blooming plants

It rained again last night, and it looks like the gardener will have to service the Lawn mower before long.

Ha, one can only dream!

 Had a thought about this course on my way to The Edge, in a carriage on a train.

I will probably never be able to watch another news story unfold without thinking of the cutaways, voice overs and other artifacts that are used to create a more interesting television tale.

Now, as they say in the big world, ‘The story is wrote, the tale is told’, and the early part of the day was spent having the aforementioned scrutinised for Grammar lapses, Glaring omissions and Gloop-like rubbish, by our long suffering coordinator.

The best bit was that The Piece was not completely discarded into that big dustbin, bottom left of screen (on an Ubuntu operating system). It survived with just a modest little rewrite.

 So, with a huge sigh of relief, the remaining hours were spent making the suggested changes and generally tarting up the script, and finally making it LOOK as if it had some potential to be turned into a printed story, on the morrow.

Fail Again…. Never shout words with capital letters, see?

This was polishing tartiness in the truest sense. I was amazed at how many punctuation errors there were. Many of them were caused because those sorts of punctuation had been totally erased, Terminator-style, by the long thin cane of school days.

Not talking about forgetting full stops or commas, but rather the correct use and placing of bracket types, hyphens, colons and semicolons. And I know a Grammar Nazi who loves nothing better than……..

There was a lunch break today – for some. But most of us spent the time huddled in desperation over our laptops, trying to have the finished version out by Noon +30 minutes.

 The afternoon session turned into a great surprise (well almost everything turns into a surprise for me, I forget so quickly).

Intensive Marketing. A lot of the discussion related to our earlier work on the various methods of defining and maintaining consistency in Profiles, Marketing and measuring the effectiveness of a campaign throughout a given event, using a variety of tools that I will not expand upon, because the unloved and unclean won’t have the slightest idea what I am writing about. The last sentence is a bit of a lungful, used to get the taught bit over as quickly as possible.

How the instructor managed to shoehorn so much more usable data into our already saturated brains, one can only live in amazement and wonder.

The day’s wrap up was clear and concise, and in precisely the language I understand.

Do the final layout tomorrow morning, and we will all party on, thru afternoon and night.

Had a nice little interlude afterwards, when one of the course members came home with me to take a few photos of the Native Bees living in an old tree trunk down the bottom of the garden.

I hope that some of the photos work, but the bees were not very cooperative.

The sunshine departs with no sign of rain, so I’m off to have a 7 kilometre run with the Brisbane Southside Hash House Harriers tonight

http://www.brisbanesouthside.com/

 Awesome, I finally got a Website tag into this blog. Learning, all about learning, you know.

 


November 12, 2013 09:14 AM

Blue HackersThe Donkey in the Well

One day a farmer’s donkey fell down into a well. The animal cried piteously for hours as the farmer tried to figure out what to do. Finally, he decided the animal was old, and the well needed to be covered up anyway; it just wasn’t worth it to retrieve the donkey. He invited all his neighbours to come over and help him. They each grabbed a shovel and began to shovel dirt into the well. At first, the donkey realized what was happening and cried horribly. Then, to everyone’s amazement, he quieted down. A few shovel loads later, the farmer finally looked down the well. He was astonished at what he saw. With each shovel of dirt that hit his back, the donkey was doing something amazing: he would shake it off and take a step up. As the farmer’s neighbours continued to shovel dirt on top of the animal, he would shake it off and take a step up. Pretty soon, everyone was amazed as the donkey stepped up over the edge of the well and happily trotted off! MORAL: Life is going to shovel dirt on you, all kinds of dirt. The trick to getting out of the well is to shake it off and take a step up. Each of our troubles is a steppingstone. You can get out of even the deepest well.

November 12, 2013 03:15 AM

November 11, 2013

Tony BilbroughDay 6, Confusion and Confessions

I must be sending the wrong tributes to our Rain Gods. Only 4 mm in the gauge for the whole weekend. Rubbing salt in, Emergency Services sent me 3 SMS warnings that my home was about to be pummelled by hail, and left awash with gurgling drains.

Can’t help wondering if the discussions held between self and the scrawny, hair-suit rain god are being transmitted at the same rate as my ISP, Telstra Bigpond Cable.

Meaning that the last message to the aforesaid rain god was in January this year, and might only just have been received.

That message read “slow the bloody rain down, you miserable sod, or the fruit on my grape vines will get a dreaded botrytis”

Before entering class I paused and reflected on the importance of today, beside the fallen Elephant. Remembering…. Remembering the mates we all had, who never quite made it back from our various conflicts.

The eleventh Hour, of the eleventh Day, of the eleventh Month.

Until next year then, my living friends.


 

Now, this is important too – today I learned, with more than a little dismay, that I had completely misunderstood what I was supposed to do over the weekend.

Teacher said we were to get all the research and information for the final story, work it into something usable, so that the remaining time could be spent making it readable.

Definitely not what I thought I heard, when talking to Gazza about his coffee machine the other night.

It’s now a little after 11pm on Monday evening; I have just caught up with where I was supposed to be by that time last night.

Back to the Blog of today, and think of all I should have absorbed, for disgorgement to digital word, tonight.

As an aside, I now have a certain empathy for the Goose that produced the pate de foie gras I will be consuming at Thursday’s Beefsteak and Burgundy luncheon. I am sure I better understand now how forced feeding works.

One of the key issues in writing is getting a story read by a wide audience, and text alone is the least intimate form for getting the message across.

So, how to improve the situation?

Well, for a start there needs to be a caption, to grab the reader and create a need to read on.

A bit of a Kicker.

The introduction should paint the overall message of the story we are trying to tell – a sort of precis, but interesting, and holding out the main issue for all to see.

Always try to place a picture, or map, or some visual artefact close to the banner, but do make sure that it does not impinge on other stories close by.

If using quotes – and a good story should have at least three – make sure you have some medium to verify that what you have written is accurate.

If you are not able to make that recording, read what you have written back to the Talent, and have him/her confirm it is correct. While it’s not the best solution, it’s better than not having a quote at all.

Always do your research for legal implications, and introduce clearly any legal matters in the story.

Use the software Murally to create the story line with sticky notes – I am a long way from using that right now, and feel it may be another incarnation before any understanding of its usefulness takes hold. Pinned up there with the usefulness of Twitter.

Things not to do when writing

Don’t use an Acronym without first elucidating – I used HUMBUG in the as yet unpublished story, then went to type out what it stood for [it’s a computer group I have hung out with for about 15 years] and was stunned when I found I had to go back to the web site to find what the letters really stood for. I like to think that in this particular case I was having a bit of a senior moment [like losing my car keys for the last 3 days].

Moving right along ….

Do not use slang or jargon unless writing to a niche group [Like the HUMBUG group?]

Remember your audience – hmmmm, mine is very small, and all very polite.

Don’t assume that your audience knows what you have just said or written – I can see a pattern coming up here. I’m no longer sure what I just wrote, either.

 With Interviews to camera

One needs good footage; use cutaways to enhance the story; eye witnesses can be good, even when they are not sure what they have seen? What?

Check sound – you can’t get away with poor sound. I have found Truth at last.

 For Elements of an Audio story

Use a good voice for the presenter, get sound grabs of relevant noises, and use atmospheric noise if it fits with the story line. Use music to set a mood, but be aware of copyright restrictions.

And finally -

Not all stories will have these elements.

 Somewhere about this stage of the lectures, I found that I had left the laptop battery charger at home, and the warning light was blinking. So I made my excuses and bolted for an early train.

The rest of the tale will unfold …. Later, I’m sure

 Good night all.

PS. I’m sure that all will have noticed that this blog has broken almost every single guideline we were given today.

Think Images, Links and a Cosmic picture at ‘The End’.


November 11, 2013 02:29 PM

November 10, 2013

Ben MartinRePaper 2.7 inch epaper goodness from the BeagleBone

A little while back I bought a rePaper 2.7 inch eInk display. While the smaller screens, down to 1.4 inch, have few enough pixels to be driven from an Arduino, the 264x176 screen needs 264×176/8 = 5,808 bytes (call it 5.7k) for a single frame buffer, and you need two buffers to "wax on, wax off" the image on the display in order to update it. The short story is that these displays work nicely from the BeagleBone Black. You have to have a fairly recent kernel in order to get the right sys files for the driver. Hint: if you have no "duty" file for your pwm then you have too old a kernel.

So the first image I chose to display after the epd_test was a capture of fontforge editing Cantarell Regular. Luckily, I've made no changes to the splineset so my design skills are not part of the image. The rendering of splines in the charview of fontforge uses antialiasing, as it was switched over to cairo around a year ago. As the eInk display is monochrome the image displayed is dithered back to 1 bit.

With the real time collaboration support in fontforge this does raise the new chance to see a font being rendered on eInk as you design it (or hint it). I'm not sure how many fonts are being designed with eInk as the specific consumption ground. If you are interested in font design, check out Crafting Type, which uses fontforge to create new type, and you should also be able to see the collaboration and HTML preview modes in action.

Getting the actual eInk display to go from the BeagleBone had a few steps. Firstly, I managed to completely fill up the 2gb of eMMC where my Angstrom was installed, so now I'm running the whole show off a high speed 8gb sandisk card. I spent a little extra cash on a faster card; it's one of the extreme super panda + extra adjective sandisk ones. The older kernel I had didn't have a duty file for the PWM pin that the driver wanted to use. Now that I have a fully updated beaglebone black boot area I have that file. FWIW I'm on kernel version 3.8.13-r23a.49.

Trying out the epd_test initially showed me some broken lines and after a little bit what looked like a bit of the cat from the test image. After rechecking the wireup a few times I looked at the code and saw it was expecting a 2 inch screen. That happens in a few places in the code. So I changed those to reflect my hardware. Then the test loop ran as expected!

The next step was getting the FUSE driver installed (a change for the screen size is needed there too). Then the python demos could run, and thus the photo above was made. My next step is to create a function to render cairo to /dev/epd/display in order to drive the display directly from a cairo app.
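In the meantime, PIL will happily do the scale-and-dither for an arbitrary image. A rough sketch of pushing a screenshot at the panel through the FUSE files – the /dev/epd paths and the 'U' update command are my reading of the demo code, so treat them as assumptions and check against your install:

from PIL import Image

# Scale to the 2.7 inch panel and dither down to 1 bit (Floyd-Steinberg).
img = Image.open("fontforge-shot.png").convert("L").resize((264, 176)).convert("1")

# Hand the packed 1 bit frame to the epd-fuse driver, then ask for a refresh.
with open("/dev/epd/display", "wb") as f:
    f.write(img.tobytes())
with open("/dev/epd/command", "w") as f:
    f.write("U")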

A huge thank you to rePaper for making this so simple to get going. The drivers for the Raspberry Pi and Beagle are up on their github page. I had been looking at the Arduino driver and its SPI code, thinking about porting that over to Linux, but now that's not necessary! I might design some cape love for this, perhaps with a 14 pin IDC connector on it for eInk attaching. Shouldn't look much worse than last night's SPI only monster, though something etched would be nicer.



The 2.7 inch changes are below, the first one is just slightly more verbose error reporting. You'll also want to set EPD_SIZE=2.7 in /etc/init.d/epd-fuse.

diff --git a/PlatformWithOS/BeagleBone/gpio.c b/PlatformWithOS/BeagleBone/gpio.c
index b3ded6f..d1df3df 100644
--- a/PlatformWithOS/BeagleBone/gpio.c
+++ b/PlatformWithOS/BeagleBone/gpio.c
@@ -767,7 +767,7 @@ static bool PWM_enable(int channel, const char *pin_name) {
                                usleep(10000);
                        }
                        if (pwm[channel].fd < 0) {
-                               fprintf(stderr, "PWM failed to appear\n"); fflush(stderr);
+                               fprintf(stderr, "PWM failed to appear pin:%s file:%s\n", pin_name, pwm[channel].name); fflush(stderr);
                                free(pwm[channel].name);
                                pwm[channel].name = NULL;
                                break;  // failed
diff --git a/PlatformWithOS/demo/EPD.py b/PlatformWithOS/demo/EPD.py
index da1ef12..41cc6c1 100644
--- a/PlatformWithOS/demo/EPD.py
+++ b/PlatformWithOS/demo/EPD.py
@@ -48,8 +48,8 @@ to use:

     def __init__(self, *args, **kwargs):
         self._epd_path = '/dev/epd'
-        self._width = 200
-        self._height = 96
+        self._width = 264
+        self._height = 176
         self._panel = 'EPD 2.0'
         self._auto = False

diff --git a/PlatformWithOS/driver-common/epd_test.c b/PlatformWithOS/driver-common/epd_test.c
index e2f2b5a..afe3cb8 100644
--- a/PlatformWithOS/driver-common/epd_test.c
+++ b/PlatformWithOS/driver-common/epd_test.c
@@ -72,7 +72,7 @@ int main(int argc, char *argv[]) {
        GPIO_mode(reset_pin, GPIO_OUTPUT);
        GPIO_mode(busy_pin, GPIO_INPUT);

-       EPD_type *epd = EPD_create(EPD_2_0,
+       EPD_type *epd = EPD_create(EPD_2_7,
                                   panel_on_pin,
                                   border_pin,
                                   discharge_pin,

November 10, 2013 05:29 AM

November 09, 2013

Tony BilbroughDay four, The interview and Interviewee

Gazza Curtis is a geek, a dedicated dabbler in all forms of electronic circuitry, and most importantly for today’s interview, a barista extraordinaire.

I am not too sure how to present a conversation on paper – oops, as an electronic, digital typeface – so have used [*] for the question and [-] for the answer.

So let ‘Understanding one man’s view on coffee’ unfold.

 


 

* Just what is it you like about your coffee?

- Well, let me say quite firmly, I am not a coffee snob, even though my daughter is. I have always enjoyed coffee over other beverages, and have had many different coffee makers. Two years back I bought the standard Aldi model for about $40, and have stayed with the brand ever since.

The actual sachets work out to only 35c each, so it makes a reasonably cheap cup. And it tastes great, he added.

*What is the difference between a cup of instant and a freshly ground bean coffee?

- Well, the pod type always tastes fresh because it’s only opened moments before the liquid gets forced thru it. A jar of Instant 42 bean coffee degrades from the moment the jar is opened, and continues to lose its fresh flavour from that moment on. Well, in my view, that is.

*How many different styles of ground coffee bean can you identify?

- Well, I have my preferred style; at the moment my favourite pod is the Expressi Abruzzo. But there are many more styles to choose from. We will be tasting the Tauro and Perugia, which are mid strength brews, while the Reggio and Colombia rate a little higher at 8; the Abruzzo is very strongly caffeinated, and rated at 12. At the other end of the scale you could try a Decaf, which is only rated at 2, or a Florenzi at 3.

 


*Do you have to have coffee at the beginning of each day to function normally?

- Absolutely, I have the same type of coffee maker at work as well! So I know I can always drink a consistent style of coffee.

*Do you think coffee makes one constipated?

- Well, it is a diuretic. I have had people tell me it does, but I have personally never experienced that type of problem. [The rest of this conversation was edited out, in case my Grandchildren read this blog]

*Which country do you think produces the best tasting coffee bean?

- I think that almost any Arabica beans taste great under certain circumstances, particularly if well roasted. At one end of the scale, the Somali method of roasting the endosperm-laden beans in a pan over an open fire will obviously lead to an inconsistency in flavour, but that might well be a part of the character of their particular style. Not that I am biased, but the coffee grown inland from Airlie Beach, and up in the Atherton Tablelands, tastes quite spectacular. Their production is very limited, so that sort of knowledge is best kept a secret. Oh hell, did I just say that?

*Do you suffer withdrawal symptoms if you go without coffee for more than a day?

- Not as much as I used to.

*Is it very difficult to operate this particular machine?

- It’s really quite simple. There are just 7 points to remember:

.make sure there is water in the jug

.turn the power on

.put a mug under the spout and press the flush button, it looks like a shower spray!

.drop your chosen pod into the slot, but make sure it is located correctly.

.now all you have to do is choose if you want a full cup or half, and select that particular button.

.if you want milk, warm the mug and milk in a microwave, before putting under the spout, and you will get a very nice frothy brew.

 

We wander off to talk about visiting Vietnam, quadrocopters, and the next Linux conference in Perth, while sipping strong black coffee.

Thank you Gazza

Interview event ends at 22.00 hrs.

 

 Lessons learned from doing my first interview.

I was surprised that it took only a little time to think about the questions I needed to ask, and even more surprised to find that in general people really do enjoy talking about their interests. Where I have come unstuck, on both attempts, is in placing the questions to the Talent in the correct order, so the narrative flows a little more clearly.

Start the questions gently, and leave the Talent plenty of time to reflect on what they want to say. I noticed that after a moment’s pause Gazza would add another interesting point to fill out what he meant.

Leave the tough or ‘naughty’ questions to the end, in case the Talent spits the dummy, or worse, loses interest in talking to you.

And Most Importantly of all, treat coffee tasting in the same manner as Wine tasting

Sip and spit.

Last night I drank far too many different styles of coffee while discussing the character of each pod, so that after getting home around midnight, the will to sleep was long gone and the brain remained buzzing away in front of the TV for several hours, trying to wind down!

Not a great start for the 30km bike ride on Sunday morning!


November 09, 2013 10:01 PM

November 08, 2013

Tony BilbroughDay Three, Channel 10 Television is alive and well

I have discovered that there are a lot of people in our world who actually savour coffee, and experiment with the roasted bean flavours and styles, in much the same way that others do with grape varieties in wine, or hop and wheat in beers.

So this morning on the way to The Edge, I thought to count the number of coffee shops between Roma Street Station [no photography allowed] and our class, a distance of about one kilometre. Amazed to find 14 shops or stalls, all open for business at 9 am, and all seemingly able to compete successfully! Good grief, Charlie Brown, what does Guinness know about this.


 

So made a point of getting in earlier today, to have a Long Black before the learning began.

 

Most of the class day was spent preparing for the weekend assignment – To interview one or two people, then write a story based on the questions asked at the interview.

Now I know all the work so far has been leading up to this, but as the lectures unfolded a vacuum seemed to grow ever bigger inside my head. Been thinking ‘bloody hell, how to find a topic’, where to begin to do research on a topic that is still a vague mist in my mind. Oh Wikki, you and I will be so close tomorrow.

 

So, if I had to summarize my activities today, it would fall into just three parts, or perhaps four sections. Ahh, and a little bit.

 

Spent the morning listening to others’ input, because I really had nothing at all to contribute.

I have realized I have virtually no concept at all of how to think up the questions, to do the research, to ask an interviewee [known as 'talent' in the trade].

 

AND

finding out that one needs to learn heaps more about half a dozen more software tools [do not mention them here, save for later] – and I’m not talking about the scant knowledge acquired over the last 5 days on the inadequateness of WordPress and an unfathomable Twitter. No idea how many of my Twits are still jiggling around in the ether, but few seem to have found their way to their intended targets. Let me clarify the latter. I think that when I have worked out how to use the # and @ I might well have it all sorted. What to use now to disguise letters in naughty words? I ponder.

For the former, this has a long way to go. Each student has somehow managed to create a different version of WordPress that has differing controls for layout and print style. Or in some cases, none at all.

For instance I can’t get my parchment style background to ‘stick’ and there seems no way to change type styles.

And this sort of thing is vital to indicate our very individual Brand of Blog. Cough cough.

 

AND

Our outing to the Channel 10 TV station at the top of Mt Coot-tha was certainly the most interesting of the three media outlets we have seen so far. We left the building with a feeling that the staff there really enjoyed working together. They were all so accessible, and are the first to encourage us to continue with the CitizenJ concept, giving us information on how to access their news system via the various social media they use.

I have completely changed my views on the role Channel 10 the Company, and their cheerful staff, play in our society’s quest for information.


 

For our 5pm evening break, I headed over to Archives in West End, dropped down a few well-hopped IPAs, and began writing up the day’s events while it was all fresh in my mind.

 

Early evening saw some of us heading a few hundred metres up the road to Avid Reader, for wines and nibbles, and to listen to Emma Carter’s insights at her book launch. The book, ‘Beyond the Logo’, is designed to help small to mid-sized companies understand the process of creating and marketing a Brand Image in the correct sequence, and with the right types of graphic design.

Several of the CitizenJ students bought a signed copy.

 


 

Photography and Public Relations for the event were handled by Bridget Heinemann. Thank you Bridget, and further to our all too short talk, I know you would enjoy getting involved with CitizenJ.

Looking forward to the Humbug over at UQ, tomorrow afternoon, to find suitable ‘Talent’ to Interview, for an as yet unknown Topic. Don’t stop breathing.


November 08, 2013 01:48 PM

November 07, 2013

Tony BilbroughDay Two, is there a story out there?

And it seems like I’ve been here weeks already, and I know my way around the State Library complex like an old timer, and I really don’t get lost at all.

 

Just have to go back to the elephant once more, this time to follow up on the first public comment made to my Blog on Day One: that there was much more to the fallen statue than first appeared.

I discovered that the million dollar elephant had ended up on its nose because of a Rat. And I have to wonder how many passers-by ever knew there really was a Rat out there – but on ‘The Other Side’?

It is a fairly large and handsome rat, at that. A sort of Cane Rat, or Rattus rattus, or perhaps even a King Rat, with its very own patch of green, green grass.

 


 

Now quickly, back to the class room to see what the day brings.

We looked at successful photo journalism styles, uses of slides with audio overlay, straight audio as well as descriptions of an event blending a mix of all three.

 

Today, for our ‘Outing’, we visited The Fairfax Newspaper offices at 420 Adelaide Street, and were shown around the Brisbane operations by Cameron, the Brisbane operations editor.

Even after the ABC visit yesterday, I still find it amazing that so much news is gathered, and later disseminated, by so few journalists. At Fairfax there were a mere handful in the entire Brisbane operation.

Honestly, one got the feeling that there were many more IT personnel at work maintaining and operating the electronic Guts in the building, than all the journalist staff using it! Only a feeling, mind you.


On our return we split into two teams. Our team worked a topic involving an editor, journalist and sound man on one side, wanting to get an emotional story out of a bloke who, five hours earlier, had reversed his car down the driveway over their two-year-old child.

The other team discussed surveillance, and non-disclosure by a journalist and editor to a police official, and a Lawyer acting for an irate mother of a fourteen-year-old daughter, who had held a Rave? Party while mother and father were away.

 My own worst nightmare had arrived.

It was Role Play time.

No point in summarising how it all unfolded, as you had to be there to catch, and hold tight, to that roller coaster ride of polite discourse.

Suffice to say I am surrounded by a cast of superb word artists, and became enthralled by the verbal sparring that erupted, as each point was put, pushed and pulled.

And at the day’s half time [it was only 5pm and we all still had many more hours of review work and homework to do] I stepped away from the train station, and came upon a huge crime scene right in the middle of our little village.

Dozens of flood lights on stands, people with clip boards, a big thick power cable snaking back to a smelly generator, and a mass of young blokes all in working gear dashing about carrying boxes of gear in and out of four massive trucks. I instantly visualized the investigation set-up from TV series like CSI New York, or Body of Evidence.

A biggie right in my own suburban back yard.

Oh yes. I know, right now, I really really know … I have the ultimate ‘up’ on tomorrow night’s homework. This I Truly Know. A real scoop for class tomorrow. Almost piddling with excitement. Honest Johnson.

 

Out with the Android to begin surreptitiously clicking away, until I realised that I could be more brazen and take photos of almost anything I wanted from a public footpath – another one of our lessons today.

Only five steps further down the footpath, there was a large sign.

“This Subway branch is closed all day for a National Photo Shoot. We apologise for any inconvenience”.

Oh Bloody Hell, crime story blown away, photos so wasted, and I’ll have to work like all the rest of the team, over the coming weekend.

You see, the essence of yet another of today’s lessons was, “to maintain honesty in Journalism”.


November 07, 2013 11:37 AM

November 06, 2013

Tony BilbroughDay One

What a fascinating course this has turned out to be.

Looking back on the day, I am wondering just how much I have really absorbed, and am guessing that this essay/blog? will show it has been written ‘on the fly’, after judging more than a dozen wines in three hours earlier this evening.


Our day’s course began with a walk past an elephant statue – perched on its nose.

Not perched as in Norwegian Parrot, rather Perched Precariously, as in it just fell off the back of a truck! I can’t help thinking that Terry Pratchett must have had a hand in it.

Lectures began with an outline of what we could expect to achieve, and moved on with individuals explaining why they wanted to become part of the world wide Citizen Journalists group. I had already decided that reporting on interesting events might cause readers to become involved in a range of different hobbies/pastimes.

The really interesting part here is that one really needs to discover what one’s ‘Brand’ is, in order to create a consistent style. Many hints and descriptors were discussed, and most participants seemed quite able to identify with one or two of the dozen, or so, recognised categories.

I felt a little alarmed and confused here, because a part of each and every one of those descriptors for the ‘Brand’ belonged inside me. That’s not to say I felt I was all, or even many, at any given time – rather that I could easily identify with most all ‘Brands’ at some point in time.

In my case, our networking lunch was used to learn a little more about the vagaries of Twittering, though I’m no closer to seeing where it is applicable. The following weeks will tell, I’m sure. Thank you Elizabeth.

The tour thru the ground floor of the ABC studio/office set up was an eye opener. Technology that I had used in the past has changed so much in the last twenty years, and I really should have been prepared for this. So the overall impression left afterwards was of subdued amazement, though I really don’t know why, as the geek in me recognised the purpose of most all the equipment we were able to look at. Thank you Genevieve for an excellent description of your operation.


The afternoon wrap up was designed to have one look inside one’s mind, to see where a particular writing style might lead. But at this point I can only surmise that any style within me, if it does indeed exist, might become a little more apparent as the course progresses.

 


November 06, 2013 01:24 PM

Tony Bilbroughwhy?

Why do I want to be a journalist?

Never stop learning. Allow yourself to stay excited by new technology, and always stay curious.

Espoused by Ian Skidmore, Writer, Journalist, Radio announcer. An all up Great Bloke and friend. 1927 – 2013

So why do I want to better understand the mechanics of Journalism?

Being able to express oneself in a manner that is interesting to an audience has always fascinated me.

With Toastmasters one meets people who delight their audience with the spoken tale, and might later become successful businessmen or, to a lesser extent, politicians.

The same occurred when making documentaries about the Mining industry. Women who could join together disjointed events of underground operations, or in smelters or refineries, that held a viewing audience in awe long after leaving the theatre. But writing has remained a short suit. I have wondered for a long time if it were possible to learn the fundamentals of telling a tale in an interesting way by writing, instead of speaking.

I have followed some great English newspaper writers over the years, and their reporting style always seemed as clear as if they were standing nearby, chatting in a crowded room.

I believe that this short, but intense, course will improve the way I gather the information, to better make a story come alive in the minds of others.

Communication is the cornerstone of all replicating Matter, and Mankind is, of course, an integral part of this process. And in order to communicate fully, one must understand how the communication process works.

Having written a number of training manuals, which used paper, the thought occurred that it would be interesting to move on into the digital age aligned to twitter, Facebook and G+ as Digital News.

Corinda November 2013


November 06, 2013 04:59 AM

November 04, 2013

Tony BilbroughInnocence in digital

 

 


This blog is a tribute to Ian Skidmore, who in his late 70s decided to share his wonderful journalistic skills with all those of a similar age staggering into the digital era. Two weeks ago he died, and with that a great path of discovery ended for Skidmore’s Island.

Now I would like to tread gently into this digital age by recording some of the feelings and events experienced, as a tribute to the people who have helped shape so many different parts of my life.

As yet I have no idea how to link this Blog to my CitizenJ class, and have just found out that Blogging itself is already on the way to being superseded by other Social Media, such as Twitter, Facebook and G+.

Now I must leave this blog and prepare for the course.


November 04, 2013 12:25 AM

October 24, 2013

Ben MartinOpen Logic Sniffing

Since I've started doing a little I2C/SPI work, I finally got hold of the gdb of the wire: a logic sniffer. The poor little 8 pin chip in the photo below is a small EEPROM which I'm doing a read/write cycle to every second from the BeagleBone Black. The sniffer is an Open Workbench Logic Sniffer, which is available for around $50. It was a hard choice between that and the more expensive sniffers, because a longer capture buffer is usually a handier capture buffer. Though if there are no artificial delays on the bus, then I think the OWLS will probably capture the interesting stuff that is happening.



The first trick was to work out how to use triggers in a simple way. Which for me was: if I find a 'read' command (byte=3) on the MOSI, then that's a trigger. I've told the bone to clock the SPI back down to 5khz and the sniffer to sample at 10khz, which gives me about 2.46 seconds of capture time (at 10khz that is roughly the sniffer's 24K sample buffer). So I will surely see two read/write iterations one second apart.

I haven't worked out how to tell the software to ignore lines 1, 2, 4, 5 so they don't show up as noise, so for now those physical snouts are explicitly grounded. After running the SPI analyser in mode 0 I get the below. The only MISO use is on byte 4, where the old value of 0x1 is read from the EEPROM. The activity on the right is setting write enable (0x6), writing the new value (0x2), and then write disable (0x4). Handy to see that chip select is held and released between those three write related tasks, as the enable/disable have to be single byte commands where CS is dropped before they become effective.
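For reference, the traffic being decoded is just the standard 25xx-series EEPROM opcode set. A rough Python equivalent of the test loop on the bone, assuming the usual py-spidev bindings (the bus/device numbers and the single address byte are illustrative):

import spidev
import time

READ, WRITE, WREN, WRDI = 0x03, 0x02, 0x06, 0x04  # standard 25xx opcodes
ADDR = 0x00

spi = spidev.SpiDev()
spi.open(1, 0)           # bus/device: adjust for your wiring
spi.max_speed_hz = 5000  # clocked right down so the sniffer keeps up

while True:
    # xfer2 holds chip select for the whole list, so each call below is one
    # CS assert/release -- exactly the pattern visible in the capture.
    old = spi.xfer2([READ, ADDR, 0x00])[2]   # dummy byte clocks the value out
    spi.xfer2([WREN])                        # write enable: one byte on its own CS
    spi.xfer2([WRITE, ADDR, (old + 1) & 0xFF])
    spi.xfer2([WRDI])                        # write disable, also on its own CS
    time.sleep(1)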


The conversation is available in byte format in the analyse window shown below. Here you can see the read value go from 3 to 0 to 1. This is because I triggered on the first read, and the first read got back 3 because that is where I stopped the test program last time (ie, it left 3 at the nominated EEPROM address).


Now to get sigrok to have a sniff too.

October 24, 2013 03:35 AM

October 10, 2013

Blue HackersWorld Mental Health Day

October 10th is World Mental Health Day, a yearly awareness item on the agenda since 1992.

On this day, I would like to draw your attention to an article (in the Vancouver Sun) this week on Dr Gary Greenberg, about the American Psychiatric Association‘s Diagnostic and Statistical Manual of Mental Disorders (DSM), the leading authority on mental health diagnosis and research. This document is used in the US, (Canada?) and UK for assessment/diagnosis.

Dr Greenberg makes the point that in recent times in particular, the number of classified “disorders” has skyrocketed, in general but also in particular in the realm of young children. A small child having a temper tantrum can now be classified as a disorder! This in itself is of course already a problem. Obviously, not diagnosing something is detrimental. But from my perspective, lowering the bar too far and casting the net too wide has the potential to do a great deal of harm to the wellbeing of lots of people. I’d suggest that beyond not being helpful, it’s counterproductive.

Dr Greenberg also notes that with DSM regarded as authoritative, and diagnosis increasingly resulting in medication, the problem is exacerbated. When other organisations use DSM diagnoses as a reference point for policy, things go bad. Take for instance the forced medication of children based on ADHD diagnoses – it’s forced because the medication is a prerequisite for schools accepting them. Of course there will be kids with issues that merit some form of support and treatment. But you can see how the aforementioned trail from DSM to school authorities forces the child onto medication, even though medication might not be the (most) appropriate avenue.

Medicating everything is not the way – life is not a disease, and what’s considered “normal” has a pretty broad spectrum. Demanding narrow conformity and medicating everything outside that boundary is scary. On the other hand, other support mechanisms (including education) hinge on diagnoses as well – so when a threshold is effectively raised, this might remove some people from the medication realm, but it also removes other support. So there it goes wrong again. Complicated matters.

October 10, 2013 12:41 AM

October 06, 2013

Daniel DevineThe Mark of Un-Authenticity

With 3D objects getting increasingly easy to scan and print, and print quality increasing, I think accusations of counterfeiting and unlicensed production are going to start coming thick and fast. Should we start marking 3D printed objects clearly as non-original, different, alternative, un-official to help defend against these claims?

Fig: An "Unofficial" mark.

Read more…

October 06, 2013 04:32 AM

October 03, 2013

Blue HackersCould Diet Sodas be Making You Depressed?

October 03, 2013 10:33 PM

September 26, 2013

Daniel Devinelinux.conf.au 2014 - I'm Going

Last week (or thereabouts) I registered for linux.conf.au 2014, booked flights and accommodation, and then gave away literally all my savings to pay for it all. The conference itself is cheap; it's all the extra things that really cost money.

I have a feeling I will particularly enjoy LCA2014 because I'm keen to chase down some people who have similar interests. Python, private email and federated social networking are on my agenda. I will also get my act together and take part in the keysigning (which I assume will happen), because it's been shown that building a web of trust is now more important than ever.

There are probably some other things I will do in Perth while I am there. Hopefully I can get some savings together and do the extra little things that make the trip worth it.

September 26, 2013 07:37 AM

September 13, 2013

Daniel DevineAnonymous Authentication With Public Key Cryptography

I was looking at BrowserID, which is an awesome decentralised authentication system that allows anybody with an email address to authenticate with a single identity. However, what if you don't want the service you are authenticating with to know your email address? What if you want several "single identities"? What if you want to be anonymous?

I've come up with a system which should help separate your personal identity from the account & data.
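(The details are behind the link, but the general mechanics go something like this: generate a fresh keypair per service, register only the public key, and log in by signing a server challenge. A sketch using the python cryptography package – the names and flow here are illustrative, not the actual scheme:)

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)

# One keypair per service, so accounts at different services can't be linked.
key = Ed25519PrivateKey.generate()
pub = key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw)

# Registration: the service stores only `pub` -- no email address involved.
# Login: prove ownership of the key by signing the server's random challenge.
challenge = b"random-nonce-from-server"
signature = key.sign(challenge)

# Server side: verify() raises InvalidSignature if the proof is bad.
Ed25519PublicKey.from_public_bytes(pub).verify(signature, challenge)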

Read more…

September 13, 2013 02:27 AM

September 05, 2013

Pat NicholsBlack Dog

How to say it? Churchill’s black dog has come home. I know it. No denying it. No I don’t want to talk about it. Why mention it? Admitting it is the first step to overcoming it. Do I need your help? Probably. Will I accept it if offered? Almost certainly not. It’s a personal thing. I’ve beaten it before and will again. Right now is hard. Every day is hard.

Why mention it here? I know it is common, more common than you think. Look across the room. How many people can you see? 10? 20? 30? Think of the statistics. Depressive disorder affects 3.4% of adult men and 6.8% of adult women over any given 12 month period (Australian Bureau of Statistics figures for 1998). In Australia. With people like us. See those people when you look up? Odds on, one of those people is suffering right now. Silently.

If it is you, don’t give up. I’ve beaten it before and will again. Don’t become a statistic. Don’t seek a permanent solution to a temporary problem. Can you beat it on your own? Maybe. Why risk it? Seek professional help. I’m doing it today.

Stigma kills. How many times can one have a migraine or diarrhea or any other malady? Admit the truth. It makes it easier to bear. Will it affect my promotion chances in the future and potential pay? Probably. That’s the stigma.

I’d sooner get better than promoted anyway.

Yesterday was my 8th year anniversary at my current employer. Management has known of my illness most of that time. In those eight years I've been hospitalised once. And promoted twice.

September 05, 2013 10:57 PM

July 29, 2013

Ben MartinGDrive mounting released!

So libferris version 1.5.18.tar.xz is hot off the make dist, including the much-discussed support for mounting Google Drive. The last additional feature I decided to add before rolling the tarball was support for viewing and adding to the sharing information of a file. Being able to "cp" a file to google://drive didn't really do much for me without also being able to unlock it for given people I know should have access to it. So now you can do that from the filesystem as well.

So, since the previous posts have been about the GDrive API and various snags I ran into along the way, this post is about how you can actually use this stuff.

Firstly run up the ferris-capplet-auth app and select the GDrive tab. I know I should overhaul the UI for this auth tool, but since it's mostly only used once per web service I haven't found the personal desire to beautify it. Inside the GDrive tab, clicking on the "Authenticate with GDrive" button opens a dialog (which should become a wizard); the first thing to do, as it tells you, is visit the console page on google to enable the GDrive API. Then click or paste the auth link in the dialog to allow libferris to get its hands on your data. The auth link goes to google and tells you what libferris is wanting. When you OK that, you are given a "code" that you have to copy and paste back into the lower part of the dialog. Then OKing the dialog will have libferris get a proper auth token from google, and you are all set.
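For the curious, what happens behind that final OK is the bog standard OAuth 2 installed-app exchange of the pasted code for tokens. Libferris does the equivalent of this internally; sketched here in Python with made-up credentials:

import requests

resp = requests.post(
    "https://accounts.google.com/o/oauth2/token",
    data={
        "code": "4/the-code-you-pasted",             # from the dialog above
        "client_id": "YOUR_CLIENT_ID",               # from the google console
        "client_secret": "YOUR_CLIENT_SECRET",
        "redirect_uri": "urn:ietf:wg:oauth:2.0:oob", # the copy/paste flow
        "grant_type": "authorization_code",
    },
)
tokens = resp.json()  # access_token for requests, refresh_token for later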

So to get started the below command will list the contents of your GDrive:

$ ferrisls google://drive


To put a file up there you can do something like:

$ date >/tmp/sample.txt
$ ferriscp /tmp/sample.txt google://drive


And you can get it back with cat if you like. Or ferriscp it somewhere else etc.

$ fcat google://drive/sample.txt
Mon Jul 29 17:21:28 EST 2013


If you want to see your shares for this new sample file use the "shares" extended attribute.

$ fcat -a shares google://drive/sample.txt
write,monkeyiq

The shares attribute is a BINEBO (Bytes In Not Equal Bytes Out). Yay for me coining new terms! This means that what you write to it is not exactly what you will get when you read back from it. The handy part of that is that if you write an email address into the extended attribute, you are adding that person to the list of folks who can write to the file. Because I'm using libferris without FUSE, and bash doesn't understand libferris URLs, I have to use ferris-redirect in the below command. You can think of ferris-redirect like shell redirection (>), but you can also supply the extended attribute to redirect data into with -a. If I read back the shares extended attribute I'll see a new entry in there. Google will also have sent a notification email to my friend with a link to the file.

$ echo niceguy@example.com \
   | ferris-redirect -a shares google://drive/sample.txt
$ fcat -a shares google://drive/sample.txt
write,monkeyiq
write,Really Nice Guy

I could also add some hookup to your "contacts" for this, so your evolution addressbook nicknames or google contacts could be used to look up a person. In this case, with names changed to protect the innocent etc, google hypothetically thinks the name for that email address is Really Nice Guy because he is in my contacts on gmail.

All of this extends to the other virtual filesystems that libferris supports. You can "cp" from your scanner or webcam, or a tuple of a database, directly to Google Drive if that floats your boat.
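
For instance, something like the below (the scanner URL here is purely illustrative; the exact schemes and paths libferris exposes will depend on your build):

$ ferriscp scanner://head/image0.png google://drive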

I've already had a bit of a sniff at the dropbox API and others, so you might be able to bounce data between clouds in a future release.

July 29, 2013 07:47 AM

July 27, 2013

Ben MartinThe new google://drive/ URL!

The very short story: libferris can now mount Google Drive as a filesystem. I've placed that in google://drive and will likely make an alias from gdrive:// to that same location so either will work.

The new OAuth 2.0 standard is so much easier to use than the old 1.0 version. In short, after being identified and given the nod once by the user, in 2.0 you supply a single secret with each request; in 1.x you have to use a per-message nonce, create hashes, send the key and the token, etc. The main drawback of 2.0 is that you have to use TLS/SSL on each request to protect that single auth token. A small price to pay, as you might well want to protect the entire conversation anyway if you are doing things that require authentication.
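
To make the contrast concrete: an authenticated OAuth 2.0 request is just one header sent over TLS. A sketch, assuming a previously obtained access token and using the Drive v2 file listing as the example request (an illustration only, not necessarily what libferris sends internally):

$ curl -H "Authorization: Bearer $ACCESS_TOKEN" \
    "https://www.googleapis.com/drive/v2/files"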

A few caveats of the current implementation: MIME types on uploaded files are based on file name sniffing. That is because for an upload you might be using cp foo.jpg google://drive, and the filesystem just copies the bytes over, but GDrive needs to know the mimetype for that new File at creation time. The GDrive PATCH method doesn't seem to let you change the mimetype of a file after it has been sent. A better solution will involve the cp code prenotifying the target location so that some metadata (the mimetype) can be prefetched from the source file if desired. That would allow full byte sniffing to be used.

Speaking of PATCH, if you change metadata using it, you always get back a 200 response. No matter what. Luckily you also get back a JSON string with all the metadata for the file you have (tried to) update. So I've made my PATCH caller code ignore the HTTP response code and compare the returned file JSON to see whether the changes actually stuck. If a value isn't set how it is expected, my PATCH throws an exception. This is in contrast to the docs for the PATCH method, which claim that the file JSON is only returned "if successful".
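
As a sketch of that workaround (hypothetical FILE_ID, token as above): send the PATCH, then diff the echoed metadata rather than trusting the 200.

$ curl -X PATCH \
    -H "Authorization: Bearer $ACCESS_TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"description": "my new description"}' \
    "https://www.googleapis.com/drive/v2/files/FILE_ID"

If the description field in the returned JSON doesn't match what was sent, treat the call as failed.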

Oh yeah, one other tiny thing about PATCH. If you patch the description it didn't show up in Firefox for me until I refreshed the page. Changing the title does update the Firefox UI automatically. I guess the sidepanel for description hasn't got the funky web notification love yet.

There are two ways I found to read a directory: using files/list and children/list. Unfortunately the latter, while returning only the direct children of a folder, also returns only a few pieces of information for those children, the most interesting being the child's id. On the other hand, files/list gives you almost all the metadata for each returned File. So on a slower link, one doesn't need thinking music to work out whether one round trip or two is the desired number. files/list also returns metadata for files that have been deleted, and for files which others have shared with you. It is easy to set a query like "hidden = false and trashed = false" for files/list so it doesn't return those dead files. Filtering on the server exclusively for files that you own is harder. There is a query alias sharedWithMe but no ownedByMe to return the complementary set. I guess perhaps "not sharedWithMe" would == ownedByMe.
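
For the curious, that query rides URL-encoded on the q parameter of files/list; a sketch with the same assumed bearer token:

$ curl -H "Authorization: Bearer $ACCESS_TOKEN" \
    "https://www.googleapis.com/drive/v2/files?q=hidden%20%3D%20false%20and%20trashed%20%3D%20false"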

Currently I sort of ignore the directory hierarchy that files/list returns, so all your drive files just appear in google://drive/ instead of in subdirs as appropriate. I might leave that restriction in for the first release. It's not hard to remove, but I've been focusing on upload, download, and metadata change.

Creating files, updating metadata, and downloading files from GDrive all work and will be available in the next libferris release. I have one other issue to clean up (rate limiting directory read) before I do the first libferris release with gdrive mounting.

Oh, and big trap #2 for the young players: to actually *use* libferris on gdrive after you have done the OAuth 2.0 "yep, libferris can have access" step, you have to go to code.google.com/apis/console and enable the Drive API for your account, otherwise you get access denied errors for everything. And once you go to the console and do that, you'll have to OAuth again to get a valid token.

A huge thank you to the two people who contributed to the ferris fund raising after my last post proposing mounting Google Drive!

July 27, 2013 10:21 AM

July 23, 2013

Ben MartinMounting Google Drive?

So on the heels of resurrecting and expanding the support for mounting vimeo as a filesystem using libferris, I started digging into mounting Google Drive. As is normally the case for these things, the plan is to start out with listing files, then uploading files, then downloading files, then updating the metadata for files, then rename, then delete, and then funky stuff like "tail -f" and append instead of truncate on upload.

One plus of all this is that the index & search in libferris will then extend its claws to GDrive as well as desktop files, since I&S is built on top of the virtual filesystem and uses the virtual filesystem to return search results.
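
If that pans out, indexing GDrive should look no different from indexing local files. A sketch, assuming the feaindexadd/feaindexquery pair from libferris's index & search tooling, with a query syntax quoted from memory:

$ feaindexadd google://drive
$ feaindexquery '(name=~sample)'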

For those digging around maybe looking to do the same thing, see the oauth page for desktop apps; the meat seems to be in the Files API section. Reading over some of the API, the docs are not too bad. The files.watch call is going to take some testing to work out what is actually going on there. I would like to use the watch call for implementing "tail -f" semantics on the client, which is in turn most useful with open(append) support. The latter I'm still tracking down in the API docs, if it is even possible. PUT seems to update the whole file, and PATCH seems very oriented towards doing partial metadata updates.

The trick that libferris uses of exposing the file content through the metadata interface seems to be less used by other tools. With libferris, using fcat and the -a option to select an extended attribute, you can see the value of that extended attribute. The content extended attribute is just the file's content :)

$ date > df.txt
$ fcat -a name df.txt
df.txt
$ fcat -a mtime-display df.txt
13 Jul 23 16:33
$ fcat -a content df.txt
Tue Jul 23 16:33:51 EST 2013

Of course you can leave out the "-a content" part to get the same effect, but anything that wants to work on an extended attribute will also implicitly be able to work on the file's byte content through this mechanism.
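
The symmetry should hold for writing too: since ferris-redirect targets an extended attribute, pushing bytes at the content EA amounts to writing the file itself. A small sketch reusing the tools from above (assuming writes to the content EA replace the file's bytes):

$ echo "new bytes" | ferris-redirect -a content df.txt
$ fcat df.txt
new bytes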

If anyone is interested in hacking on this stuff (: good ;) patches accepted. Conversely if you would like to be able to use a 'cp' like tool to put and get files to gdrive you might consider contributing to the ferris fund raising. It's amazing how much time these Web APIs mop up in order to be used. It can be a fun game trying to second guess what the server wants to see, but it can also be frustrating at times. One gets very used to being able to see the source code on the other side of the API call, and that is taken away with these Web thingies.

Libferris is available for Debian armhf (hard float) and Debian armel (soft float). I've just recently used the armhf build to install ferris on an OMAP5 board. I also have a build for the Nokia N9 and will update my Open Build Service project to roll fresh RPMs for Fedora at some stage. The public OBS desktop targets have fallen a bit behind the ARM builds because I tend to develop on, and thus build from source on, the desktop.

July 23, 2013 06:55 AM

July 21, 2013

Ben MartinLike a Bird on a Wire(shark)...

Over recent years, libferris has been using Qt to mount some Web stuff as a filesystem. I have a subclass of QIODevice which acts as an intermediary to allow one to write to a std::ostream and stream that data to the Web, over a POST for example. For those interested, that code is in Ferris/FerrisQt.cpp of the tarball. It's a bit of a shame that Qt-heavy web code isn't in KIO or that the two virtual filesystems are not more closely linked, but I digress.

I noticed a little while ago that cp to vimeo://upload didn't work anymore. I had earmarked that for fixing and recently got around to making it happen. It's always fun interacting with these Web APIs. Over time I've found that Flickr sets the bar for well documented APIs that you can start to use if you have any clue about making GET and POST requests etc. At one stage google had documented their API in a way that you could never use it. I guess they have fixed that by now, but it did sort out the pretenders from those who could at least sniff HTTP and were determined to win. The vimeo documentation IIRC wasn't too bad when I added the upload support, but the docs have taken a turn for the worse it seems. Oh, one fun tip for the young players: when one API call says "great, thanks, well done, I've accepted your call" and then a subsequent one says "oh, a strange error has happened", you might like to assume that the previous call might not have been so great after all.

So I started tinkering around, adding oauth to the vimeo signup, and getting the getTicket call to work. Having getTicket working meant that my oauth-signed call was accepted too. I was then faced with the upload of the core data (which is normally done with a rather complex streaming POST), and the final "I'm done, make it available" call. On vimeo that last call seems to be two calls now: first a verifyChunks call and then a complete call.

So, first things first. To upload you call getTicket, which gives you an endpoint (an HTTP URL to send the actual video data to) as well as an upload ticket to identify the session. If you POST to that endpoint URL with the CGI parameters converted into multipart/form-data, with boundaries and individual Content-Disposition: form-data elements, you lose. You have to have the ticket_id in the URL after the POST text in order to upload. One little trap.

So then I found that verifyChunks was returning "Error 709: Access to the chunk list failed". And that was after the upload had been replied to with "OK. Thanks for the upload.". Oddly, I also noticed that the upload of video data would hang from time to time. So I let the shark out of the pen again, and found that vimeo would return its "yep we're done, all is well" response to the HTTP POST call at about 38-42kb into the data. Not so great.

Mangling the vimeo.php test script they supply for upload with my oauth and libferris credentials, I found that the POST had an Expect: 100-continue header. Right after the headers were sent vimeo gave the nod to continue, and then the POST body was sent. I assume that just ploughing through and sending the headers immediately followed by the body confused the server end, and thus it just said "yep, ok, thanks for the upload" and dropped the line. It then of course forgot the ticket_id because there was no data for it, so verifyChunks got no chunk list and returned the strange error it did. mmm, hindsight!

So I ended up converting from the POST to the newly available PUT method for upload. They call that their "streaming API", even though you can of course stream to a POST endpoint too; you just need to frame the parameters and add the MIME trailer to the POST if you want to stream a large file that way. Using PUT I was then able to verify my chunks (or the one single chunk, in fact) and the upload complete method worked again.
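
Putting the whole dance together, it looks roughly like the below. This is a sketch only: the method names are the ones discussed above, but the endpoint URLs shown, and the OAuth signing that is omitted, are my assumptions rather than a transcript of what libferris sends.

# 1. OAuth-signed call to get an upload ticket and endpoint (signing not shown)
$ curl "https://vimeo.com/api/rest/v2?method=vimeo.videos.upload.getTicket"
# 2. stream the bytes with PUT; the ticket_id rides on the URL, not in form-data
$ curl -X PUT --upload-file movie.mp4 "$ENDPOINT?ticket_id=$TICKET_ID"
# 3. ask the server what it actually received, then publish
$ curl "https://vimeo.com/api/rest/v2?method=vimeo.videos.upload.verifyChunks&ticket_id=$TICKET_ID"
$ curl "https://vimeo.com/api/rest/v2?method=vimeo.videos.upload.complete&ticket_id=$TICKET_ID"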

In the end I've added oauth to my vimeo mounting, many thanks to the creators of the QOAuth library!

July 21, 2013 12:10 AM

June 08, 2013

Ben MartinBeagleBone Black: Walking the dog.

My software-guy-with-a-soldering-iron fun has recently extended to the BeagleBone Black. This is a wonderful little ARM machine with a 1GHz CPU, a whole bunch of GPIO pins, I2C, SPI, AIN... all the fun things, packed onto a $45 board.



On an unrelated purchase, I got a small 1.8 inch TFT display that can do 128x160 with a bunch of colours using the st7735 chip. That's shown above running the qtdemo on the framebuffer. Of course, an animation might serve to better show that off. The display was on sale for $10 and so it was soon on its way to me :) My original plan was to drive it from an Arduino... Looking around, I noticed that Matt Porter had generously contributed a driver to run the st7735 over SPI from the Linux kernel. The video of him talking at ELC about this framebuffer driver was also very informative :) It seems the same TFT can be run from the Raspberry or Beagle series of hardware.

The wiring for the panel I got was a bit different from the adafruit one that Matt used. But once you have the pinouts it's not so hard to figure out. I've currently left the 5V rail unconnected on my TFT. On the BeagleBone Black the HDMI output captures a whole bunch of pins when it starts, and unfortunately some of those pins are needed for the little TFT. One might be able to reroute the SPI to the other bus or mux the pins differently to get around that and have HDMI and the TFT at once. But I wanted to get the TFT going to see if/how it worked before changing the pins.

I had found some info on putting a line in uEnv.txt to stop the HDMI cape from loading, but that didn't work for me. On my board I saw in /sys/devices/bone_capemgr.9/slots that the HDMI was the 5th cape. When I first echoed "-5" into the slots file to unload that cape, the kernel gave a backtrace. If I did the same on a freshly booted bone it would cleanly remove the HDMI cape though. So something had been using the HDMI cape driver beforehand, and it didn't want to be removed.
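
Spelled out, the unload step is just sysfs traffic: cat the slots file to find the HDMI cape's slot number (5 on my board), then echo the negated slot number back in.

# cat /sys/devices/bone_capemgr.9/slots
# echo -5 > /sys/devices/bone_capemgr.9/slots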

With the HDMI cape unloaded, the next step is to load a "firmware" file that reserves the pins that the st7735fb driver wants to use. Since I used the same pins on the bone as the adafruit display wants, I could just use the below.

echo cape-bone-adafruit-lcd-00A0  >  /sys/devices/bone_capemgr.9/slots

A dmesg showed that a new framebuffer device fb0 had come into existence.

[   85.280471] bone-capemgr bone_capemgr.9: slot #6: Requesting firmware 'cape-bone-adafru-00A0.dtbo' for board-name 'Override Board Name', version '00A0'
[   85.284645] bone-capemgr bone_capemgr.9: slot #6: dtbo 'cape-bone-adafru-00A0.dtbo' loaded; converting to live tree
...
[   86.235178] fb0: ST7735 frame buffer device,
[   86.235178]  using 40960 KiB of video memory
[   86.236687] bone-capemgr bone_capemgr.9: slot #6: Applied #5 overlays.

After a bunch of searching around trying various things, I found that prescaling in mplayer can display to the framebuffer:

# mplayer -ao null  -vo fbdev2:/dev/fb0 -x 128 -y 160 -zoom  ArduSat_Open_Source_in_orbit.mp4

The qtdemo also runs "ok" by executing the below. I say ok because it obviously expects a higher resolution display than 128x160...

# qtdemoE -qws

It is tempting to have two screens and add a touch-sensitive film to them. With a QML/QtQuick/TodaysRebrand^TM interface the GUI should work well and be flickable across many screens.

A great hack I look forward to is running a 32x16 LED DMD using a deferred-rendering framebuffer driver like st7735fb does. I see the evil plan now: release the BeagleBone Black for $45 and draw more C/C++ programmers to being kernel hackers rather than userland ones :)

June 08, 2013 03:18 AM

May 29, 2013

Ben MartinFontForge: Rounding out the platforms for binary distribution

Earlier this year I made it simple to install FontForge on OSX. The process boiled down to expanding a zip file into /Applications. The libraries that fontforge uses have all been tinkered with to work from inside the package, and the configuration files and other dynamically opened resources and the theme are sought in the right place too.

Now after another stint I have FontForge running under 32-bit Windows 7. So finally I had a use for that other OS that has been sitting on my laptop all this time ;) The first time I got it to run it looked like below. I created a silly glyph to make sure that bezier editing was responsive...


The plan is to have the theme in use so nice modern fonts appear in the menus, and to make other expected tweaks, before making it a simple thing to install on Windows.

One, IMHO, very cool thing I did to get all this happening was to use the Open Build Service (OBS) to make the binaries. There are some DLL and header file drops for X floating around, but I tend to like to know where the libraries being linked into a program have come from. Call me old fashioned. So in the process I cross compiled chunks of X Window for Windows on the OBS servers. My OBS win32 support repository contains these needed libraries, right through to cairo and pango using the Xft backends to render.

There is a major schism there: if you are porting a native GTK+2 application over to win32, then you will naturally want to use the win32 backends to cairo et al and have a more native win32 outcome. For FontForge however, the program wants to use the native X Window APIs and the pango xft backend. So you need to be sure that you can render text to an X Window using pango's xft backend to make your life simpler. That is what the pangotest project I created does: it just puts "hello world" on an X Window using pango-xft.

A big thanks to Keith Packard who provided encouragement at LCA earlier this year that my crazy cross compile on OBS plan should work. I had a great moment when I got xeyes to run, thinking that things might turn out well after the hours and hours trying to cross compile the right collection of X libraries.

I should also mention that I'm looking for a bit of freelance hacking again. So if you have an app you want to also run on OSX/Windows then I might be the guy to make that happen! :) Or if you have cool C/C++ work and are looking to expand your team then feel free to email me.

May 29, 2013 10:01 AM

May 28, 2013

Anthony TownsParental Leave

Two posts in one month! Woah!

A couple of weeks ago there was a flurry of stuff about the Liberal party’s Parental Leave policy (viz: 26 weeks at 100% of your wage, paid out of the general tax pool rather than by your employer, up to $150k), mostly due to a coalition backbencher coming out against it in the press (I’m sorry, I mean, due to “an internal revolt”, against a policy “detested by many in the Coalition”). Anyway, I hadn’t had much cause to give it any thought before — it’s been a policy since the 2010 election I think; but it seems like it might have some interesting consequences, beyond just being more money for a particular interest group.

In particular, one of the things that doesn’t seem to me to get enough play in the whole “women are underpaid” part of the ongoing feminist, women-in-the-workforce revolution, is how much both the physical demands of pregnancy and being a primary caregiver justifiably diminish the contributions someone can make in a career. That shouldn’t count just the direct factors (being physically unable to work for a few weeks around birth, and taking a year or five off from working to take care of one or more toddlers, eg), but the less direct ones like being less able to commit to being available for multi-year projects or similar. There’s also probably some impact from the cross-over between training for your career and the best years to get pregnant — if you’re not going to get pregnant, you just finish school, start working, get more experience, and get paid more in accordance with your skills and experience (in theory, etc). If you are going to get pregnant, you finish school, start working, get some experience, drop out of the workforce, watch your skills/experience become out of date, then have to work out how to start again, at a correspondingly lower wage — or just choose a relatively low skill industry in the first place, and accept the lower pay that goes along with that.

I don’t think either the baby bonus or the current Australian parental leave scheme has any effect on that, but I wonder if the Liberals’ Parental Leave scheme might.

There are three directions in which it might make a difference, I think.

One is for women going back to work. Currently, unless your employer is more generous, you have a baby, take 16 weeks of maternity leave, and get given the minimum wage by the government. If that turns out to work for you, it’s a relatively easy decision to continue being a stay at home mum, and drop out of the workforce for a while: all you lose is the minimum wage, so it’s not a much further step down. On the other hand, after spending half a year at your full wage, taking care of your new child full-time, it seems a much easier decision to go back to work than to be a full-time mum; otherwise you’ll have to deal with a potentially much lower family income at a time when you really could choose to go back to work. Of course, it might work out that daycare is too expensive, or that the cut in income is worth the benefits of a stay at home mum, but I’d expect to see a notable pickup in new mothers returning to the workforce around six months after giving birth anyway. That in turn ought to keep women’s skills more current, and correspondingly lift wages.

Another is for employers dealing with hiring women who might end up having kids. Dealing with the prospect of a likely six-month unpaid sabbatical seems a lot easier than dealing with a valued employee quitting the workforce entirely, on its own; but it seems to me like having, essentially, nationally guaranteed salary insurance in the event of pregnancy would make it workable for the employee to simply quit, and just look for a new job in six months’ time. And dealing with the prospect of an employee quitting seems like something employers should expect to have to deal with whoever they hire anyway. Women in their 20s and 30s would still have the disadvantage that they’d be more likely to “quit” or “take a sabbatical” than men of the same age and skillset, but I’m not actually sure it would be much more likely in that age bracket. So I think there’s a good chance there’d be a notable improvement here too, perhaps even to the point of practical equality.

Finally, and possibly most interestingly, there’s the impact on women’s expectations themselves. If you expect to be a mum “real soon now”, you might not be pushing too hard on your career, on the basis that you’re about to give it up (even if only temporarily) anyway. So, not worrying about pushing for pay rises, not looking for a better job, etc. It might turn out to be a mistake, if you end up not finding the right guy, or not being able to get pregnant, or something else, but it’s not a bad decision if you meet your expectations: all that effort on your career for just a few weeks’ payoff, and then you’re on minimum wage and staying home all day. But with a payment based on your salary, the effort put into your career at least gives you six months’ worth of return during motherhood, so it becomes at least a plausible investment whether or not you actually become a mum “real soon now”.

According to the 2010 tax return stats I used for my previous post, the gender gap is pretty significant: there are almost 20% fewer women working (4 million versus 5 million), and the average working woman’s income is more than 25% less than the average working man’s ($52,600 versus $71,500). I’m sure there are better ways to do the statistics, etc, but just on those figures, if the female portion of the workforce was as skilled and valued as the male portion, you’d get roughly a $77 billion increase in GDP (4 million women times the $18,900 wage gap comes to about $76 billion; the unrounded averages presumably give the $77 billion) — and if you take 34% as the proportion of that that the government takes, it would be a $26 billion improvement to the budget bottom line. That, of course, assumes that women would end up no more or less likely to work part time jobs than men currently are; that seems unlikely to me — I suspect the best that you’d get is that fathers would become more likely to work part-time and mothers less likely, until they hit about the same level. But that would result in a lower increase in GDP. Based on the above arguments, there would be an increase in the number of women in the workforce as well, though that would get into confusing tradeoffs pretty quickly — how many families would decide that a working mum and stay at home dad made more sense than a stay at home mum and working dad, or a two income family; how many jobs would be daycare jobs (counted as GDP) in place of formerly stay at home mums (not counted as GDP, despite providing similar value, but not taxed either), etc.

I’m somewhat surprised I haven’t seen any support for the coalition’s plans along these lines anywhere. Not entirely surprised, because it’s the sort of argument that you’d make from the left — either as a feminist, anti-traditional-values, anti-stay-at-home-mum plot for a new progressive genderblind society; or from a pure technocratic economic point-of-view; and I don’t think I’ve yet seen anyone with lefty views say anything that might be construed as being supportive of Tony Abbott… But I would’ve thought someone on the right (Bolt or Albrechtsen, or Australia’s leading libertarian and centre-right blog, or the Liberal party’s policy paper) might have canvassed some of the other possible pros of the idea rather than just worrying about the benefits to the recipients and how it gets paid for. In particular, the argument for any sort of payment like this shouldn’t be about whether it’s needed/wanted by the recipient, but how it benefits the rest of society. Anyway.

May 28, 2013 04:01 PM

May 19, 2013

Ben MartinSome amateur electronics: hand made 8x8 LED matrix

So I made an 8x8 matrix of LEDs in a common cathode arrangement. Only one column is ever on at any time, but the columns cycle from left to right so quickly that neither you nor your camera get to see that little artefact. This does save on power though, so the whole layer can be run directly from the arduino LeoStick in the top right of the picture. Thanks again to Freetronics for giving those little gems away at LCA!

8x8 LED matrix, two 595 shifties and a ULN2003 current sink from Ben Martin on Vimeo.

The LEDs to light in a row are selected by a 595 shift register providing power for each row. The resistors are on the far right of the grid, leading to that shift register. The cathodes for each individual column are connected together, leading to the top of the grid (as seen in the video). Those head over to a ULN2003 current sink IC. In the future I'll use either two 2003 chips or one single 2803 (which, having eight channels to the 2003's seven, can do all 8 columns at once) to get the first column to light up too.

The ULN2003 is itself controlled by supplying power on the opposite side to select which column's cathodes will be grounded at any given moment. The control of the ULN2003 is also done by a 595 shift register, which is connected in a chain with the row shifty. The joy of all this is that you can pump in the new state and latch both shift registers at once to apply the new row pattern and select which column is lit.

Another joy of this design is that I can add further 8x8 layers on top, each at the cost of 8 resistors and one 595 to perform its row select.

There are also some still images of the array, if your interest is piqued.

The 595 chips can be had for around 40c a pop and the uln2003 for about 30c. LEDs in quantity 500+ go at around 5-7c a pop.

The code is fairly unimaginative, mainly serving to see how well the column select works and how detectable it is. In the future I should set up a "framebuffer" to run the show and have a timer refresh the array automatically...

#define DATA   6   // serial data in on the first 74HC595
#define LATCH  8   // latch (storage register clock) shared by both 595s
#define CLOCK 10   // digital 10 to pin 11 on the 74HC595

void setup()
{
  pinMode(LATCH, OUTPUT);
  pinMode(CLOCK, OUTPUT);
  pinMode(DATA, OUTPUT);
}

void loop()
{
  // 'i' is the row pattern: cycle through every possible 8-bit pattern.
  int i;
  for ( i = 0; i < 256; i++ )
  {
    // Walk a single set bit across the column-select byte that feeds
    // the ULN2003 via the second 595, lighting one column at a time.
    int col;
    for( col = 1; col < 256; col <<= 1 )
    {
      digitalWrite(LATCH, LOW);
      // The 595s are chained: shift the column select and the row
      // pattern through, then latch both at once.
      shiftOut(DATA, CLOCK, MSBFIRST, col );
      shiftOut(DATA, CLOCK, MSBFIRST, i   );
      digitalWrite(LATCH, HIGH);
    }

    delay(20);
  }
}




May 19, 2013 11:59 AM


Last updated: April 18, 2014 06:15 PM. Contact Humbug Admin with problems.