

HUMBUGers


August 28, 2017

Blue Hackers: Creative kid’s piano + vocal composition

An inspirational song from an Australian youngster.  He explains his background at the start.

Video: https://www.youtube-nocookie.com/embed/g-YuhR98mrw

August 28, 2017 12:01 AM

August 24, 2017

Blue Hackers: Mental Health Resources for New Dads

Right now, one in seven new fathers experiences high levels of psychological distress, and as many as one in ten experiences depression or anxiety. Distressed fathers often remain unidentified and unsupported, due both to a reluctance to seek help for themselves and to low levels of community understanding that the transition to parenthood is a difficult period for fathers as well as mothers.

The project hopes both to increase understanding of stress and distress in new fathers and to encourage new fathers to take action to manage their mental health.

This work is being informed by research commissioned by beyondblue into men’s experiences of psychological distress in the perinatal period.

Informed by the findings of the Healthy Dads research, three projects are underway to provide men with the knowledge, tools and support to stay resilient during the transition to fatherhood.

https://www.medicalert.org.au/news-and-resources/becoming-a-healthy-dad

August 24, 2017 12:41 AM

August 22, 2017

Blue Hackers: The Attention Economy

In May 2017, James Williams, a former Google employee and doctoral candidate researching design ethics at Oxford University, won the inaugural Nine Dots Prize.

James argues that digital technologies privilege our impulses over our intentions, and are gradually diminishing our ability to engage with the issues we most care about.

Possibly a neat follow-up to our earlier post on “busy-ness”.

Video: https://www.youtube-nocookie.com/embed/xxyRf3hfRXg

August 22, 2017 06:35 AM

August 20, 2017

Ben Martin: CNC Z Axis with 150mm or more of travel

Many hobby-priced CNC machines have limited Z axis movement. This, coupled with limited clearance on the gantry, leaves few options for work fixtures. For example, it is very unlikely that there will be clearance for a vice on the cutting bed of a cheap machine.

I started tinkering with a Z axis assembly that offers around 150mm of travel. The assembly also uses bearing blocks that should help it withstand the forces that drilling and cutting can impose.


The assembly is designed to be as thin as possible. The spindle mount is a little wider, which allows easy bolting onto the spindle mount plate that attaches to these bearings and the drive nut. The width of the assembly is important because it will limit travel in the Y axis if it can interact with the gantry in any way.

Construction is mainly in 1/4 and 1/2 inch 6061 alloy. The black bracket at the bottom is steel; this seemed like a reasonable choice since that bracket is key to carrying the weight and to the attachment to the gantry.

The Z axis shown above needs to be combined with a gantry height extension to be really effective on a hobby CNC. A longer-travel Z axis like this allows a higher gantry, and together they make fixturing easier and also pave the way for a 4th/5th axis to fit under the cutter.

August 20, 2017 10:56 PM

August 13, 2017

Adrian Sutton: Moolah Diaries – Automating Deployment from Travis CI

Thanks to Oncle Tom’s “SSH deploys with Travis CI”, I’ve now fully automated Moolah deployment for both the client and server.

Previously the client was deployed by a cronjob that pulled any changes from GitHub, built and ran the tests, and, if that all worked, synced the built resources to Apache’s docroot. That meant my not-particularly-powerful server was duplicating the build and test steps already happening in Travis CI, and there was up to a 10 minute delay before changes went live.

The server didn’t have automated deployments because it has database tests that need an actual database to test against.  I’m crazy, but not crazy enough to run database tests anywhere near real production data.

Now, with Travis CI triggering the deployment after the tests have passed, everything kicks off automatically from a git push, and it’s a nicely isolated server running all the tests.

I’ve deliberately split the deployment in two – committed to the Moolah codebase is just enough to push the artefacts over to the server and trigger its deployment script. The deployment script on the server is managed as part of its puppet configuration and controls where things are finally deployed to, manages database migrations, etc. That gives a nice clear delineation between the development of the service and the details of how it’s deployed in this particular environment, and a simple, clear interface between the two. Now I can change how things work in either the code or the server setup without thinking too much about the other half.

August 13, 2017 03:23 AM

August 11, 2017

Adrian Sutton: Moolah Diaries – The Moment Dilemma

moment is one of those javascript libraries that just breaks my heart. It does so much right – nearly every date function you’d want right at your fingertips, no major gotchas causing bugs, a clean API and good docs. I’ve used moment in some pretty hardcore, very date/time orientated apps and been very happy.

But it breaks my heart because when you pull it in via webpack, it winds up being a giant monolithic blob. I only need a couple of date functions but with moment I wind up getting every possible date function you can imagine plus translations into a huge range of languages I don’t need. If I use moment, about 80% of Moolah’s client javascript is moment.

So I was quite pleased to find date-fns, which also provides every date function you can imagine but is specifically designed to be modular so you only pull in what you actually need. Works a treat, so Moolah uses that for the date operations it needs and it barely adds any size at all.
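
For illustration, the difference looks roughly like this (a minimal sketch; the particular functions and format string are just examples of date-fns v1’s per-function modules, not the exact calls Moolah makes):

// moment drags in the whole library (every function plus locales) as one bundle chunk;
// date-fns v1 exposes each function as its own module, so webpack only includes what's imported.
import moment from 'moment';
import format from 'date-fns/format';
import addMonths from 'date-fns/add_months';

const viaMoment = moment('2017-08-11').add(1, 'month').format('DD MMM YYYY');
const viaDateFns = format(addMonths(new Date(2017, 7, 11), 1), 'DD MMM YYYY');
console.log(viaMoment, viaDateFns); // both print "11 Sep 2017"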

Except then, when I went to use a graph library, it turned out that Chart.js, which I was keen to try, depends on moment. Sigh.

August 11, 2017 11:48 AM

August 10, 2017

Ben Martin: Larger format CNC

Having access to a wood cutting CNC machine that can do a full sheet of plywood at once has led me to an initial project for a large sconce stand. The sconce is 210mm square at the base and the DAR ash I used was 140mm across. This led to the four edge grain glue ups in the middle of the stand.


The design was created in Fusion 360 by just seeing what might look good. Unfortunately the sketch export as DXF presented some issues on the import side. This was part of why a smaller project like this was a good first choice rather than a more complex whole sheet of ply.

To get around the DXF issue, the tip was to select a face of a body and create a sketch from that face, then export the created sketch as DXF, which seemed to work much better. I don't know what I had in the original sketch that I created the body from that the DXF export/import didn't like. Maybe the dimensions, maybe the guide lines; hard to know without a bisect. The CNC was using the EnRoute software, so I had to work out how to bounce things from Fusion over to EnRoute and then get some help to reCAM things on that side and set up tabs et al.

One tip for others would be to use the DAR timber to form a glue up before arriving at a facility with a larger cut surface. Fewer pieces means fewer tabs/bridges and easier reCAM. A preformed glued-up panel would also have let me use more advanced designs, such as n and u slots to connect two pieces instead of edge grains to connect four.

Overall it was a fun build and the owner of the sconce will love having it slightly off the table top so it can more easily be seen.

August 10, 2017 12:06 PM

July 31, 2017

Tim Kent: Solution to Error 0x80070714 when attempting to upgrade to Windows 10 version 1703 (Creators Update)

I was attempting to patch a Windows 10 Pro machine from version 1607 to 1703 (Creators Update), however the process kept failing with Error 0x80070714:

Feature update to Windows 10, version 1703 - Error 0x80070714

The solution was to stop the MSSQLSERVER service before kicking off the update:

Right-click the Start button (or press Windows+X) and choose "Command Prompt (Admin)" then type the following:

C:\WINDOWS\system32>net stop MSSQLSERVER
The SQL Server (MSSQLSERVER) service is stopping.
The SQL Server (MSSQLSERVER) service was stopped successfully.


Once the machine reboots after the update the service will be running again, so this shouldn't do any harm.

You may have other MSSQL instances with different service names; the same process applies.

July 31, 2017 11:59 PM

Adrian Sutton: Moolah Diaries – Finding Transactions in the Past Month in MySQL

Ultimately the point of Moolah isn’t just to record a bunch of transactions, it’s to provide some insight into how your finances are going. The key question is: how much more or less are we spending than we earn? I use a few simple bits of analysis to answer that question in Moolah, the first of which is income vs expenses over the past x months.

The simple way to do that is to show totals based on calendar month (month to date for the current month), but since my salary is paid monthly that doesn’t provide a very useful view of the current month since no income has arrived yet.

I really want to see data for the previous month as a sliding window.  So on the 3rd July the last month would be 4 June to 3 July (inclusive) and on the 25th of July it would be 26 June to 25 July. Given the way salary and bills are paid each month that provides a fairly stable view of income vs expense without fluctuating too much due to the time of the month.

Previously that was easy enough because all the transactions were in memory and we were iterating in JavaScript – flexible but doesn’t scale particularly well. With Moolah 2 we really want to do that in the database and avoid loading all the transactions.

The key bit of SQL that makes it possible to achieve this sliding window for the previous month is the GROUP BY clause:

GROUP BY IF(DAYOFMONTH(date) > DAYOFMONTH(NOW()),
            EXTRACT(YEAR_MONTH FROM DATE_ADD(date, INTERVAL 1 MONTH)),
            EXTRACT(YEAR_MONTH FROM date))

There’s nothing particularly magic here – we just decide whether to push a transaction date into the next month based on whether its day of month is past the current day of month. The full query is in analysisDao.js.
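
As a rough sketch of how that clause might sit inside a full query (the table and column names, the income/expense split by sign, and the use of mysql2’s promise pool are illustrative assumptions rather than Moolah’s actual schema or DAO code):

// Illustrative only: 'transactions', 'account_id' and 'amount' are guessed names.
async function incomeVsExpenseByMonth(pool, accountId) {
    const [rows] = await pool.query(
        `SELECT IF(DAYOFMONTH(date) > DAYOFMONTH(NOW()),
                   EXTRACT(YEAR_MONTH FROM DATE_ADD(date, INTERVAL 1 MONTH)),
                   EXTRACT(YEAR_MONTH FROM date)) AS month,
                SUM(IF(amount > 0, amount, 0)) AS income,
                SUM(IF(amount < 0, amount, 0)) AS expense
           FROM transactions
          WHERE account_id = ?
          GROUP BY month
          ORDER BY month`,
        [accountId]);
    return rows;
}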

Initially I limited the query to just the last 12 months’ worth, but MySQL can iterate so much faster than JavaScript that it easily runs through the full data set covering about 6 years. I probably should put some limit on it, but I’m interested in how long it will last.

July 31, 2017 04:44 AM

July 22, 2017

Adrian Sutton: Moolah Diaries – Data Parity

Moolah just reached its first key milestone – it’s reached “data parity” with the old version. Basically it has enough functionality that it’s viable to import the data from the old version and get the same balances and totals. Using real data that’s been generated over the last 5 years has revealed a few important things.

The good:

The bad:

The next major milestone will be when I can actually switch over from the old system. Hopefully that’s not too far off.

July 22, 2017 10:01 AM

July 21, 2017

Ben Fowler

July 21, 2017 09:04 PM

July 18, 2017

Adrian Sutton: Moolah Diaries – Tracking Account Balances

Moolah has two separate Vuex modules, transactions and accounts. That’s a fairly reasonable logical separation, but in the new Moolah it’s not a complete separation. Moolah only ever loads a subset of transactions – typically the most recent transactions from the currently selected account. As a result, accounts have their own current balance property because they can’t always calculate it from their transactions.

That means that sometimes the transaction Vuex module needs to make changes to state owned by the account module, which is a rather unpleasant breach of module separation. While we ensured transaction balances remained consistent inside each mutation, we can’t do that for account balances because Vuex (presumably deliberately) makes it impossible for a mutation to access or change state outside of the module itself. Instead, I’ve used a simple Vuex plugin that watches the balance on the most recent transaction and notifies the account module when it changes:

import {mutations as accountsMutations} from './accountsStore';

export default store => store.watch(
    (state) => {
        return state.transactions.transactions.length > 0
            ? state.transactions.transactions[0].balance
            : undefined;
    },
    (newValue, oldValue) => {
        if (store.getters["accounts/selectedAccount"] !== undefined && newValue !== undefined) {
            store.commit('accounts/' + accountsMutations.updateAccount, {
                id: store.state.selectedAccountId,
                patch: {balance: newValue}
            });
        }
    });

Most of the complexity there is dealing with the cases where we don’t yet have any transactions or don’t have a selected account. This essentially makes the syncing between transaction balances and account balances a separate responsibility that belongs to this plugin.

The plugin doesn’t completely handle transfers between accounts. The balance for the account currently being edited updates correctly, but the balance for the account on the other side of the transfer doesn’t. I’m not particularly happy with the solution, but at least for now the responsibility for reacting to transfer changes sits in the transaction module’s actions. The action can easily calculate the effect of transaction changes on the balances of other accounts and use Vuex’s dispatch to send a notification over to the account module to perform the update. Mutations can’t dispatch events, so it has to be the action that does it.
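
A rough sketch of the shape that takes (the action and mutation names, the payload, and the ‘accounts/adjustBalance’ action are hypothetical, not Moolah’s actual API):

// Hypothetical sketch of a transactions-module action handling the other side of a transfer.
export const actions = {
    async updateTransaction({commit, dispatch}, {transaction, changes}) {
        const amountDelta = changes.amount !== undefined
            ? changes.amount - transaction.amount
            : 0;
        commit('updateTransaction', {id: transaction.id, changes});
        // Mutations can't dispatch, so the action notifies the accounts module about
        // the balance change on the account at the other end of the transfer.
        if (transaction.toAccountId !== undefined && amountDelta !== 0) {
            await dispatch('accounts/adjustBalance',
                {accountId: transaction.toAccountId, amountDelta},
                {root: true});
        }
    },
};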

This leaves responsibility for updating the current account’s balance with the plugin and responsibility for updating other accounts’ balances with the transaction module, which is ugly. Perhaps the transaction module actions should be responsible for all account balances. Probably the right answer is to extract a separate class/function that the transaction module notifies at the end of each action with the state of the transaction before and after. Then that new class/function would be responsible for updating account balances. Will need to prod the code a bit to see if that really can be done…

 

 

July 18, 2017 09:18 PM

July 13, 2017

Anthony Towns: Bitcoin: ASICBoost – Plausible or not?

So the first question: is ASICBoost use plausible in the real world?

There are plenty of claims that it’s not:

A lot of these claims don’t actually match reality though: ASICBoost is implemented in Bitmain miners sold to the public, and since it defaults to off, a switch to hide it is obviously easily possible, contradicting Sam Cole’s take. There’s plenty of circumstantial evidence of ASICBoost-related transaction structuring in blocks, contradicting the basis on which Emin Gun Sirer dismisses the claims. The 15%-30% improvement claims that Guy Corem and Sam Cole cite are certainly large enough to be worth looking into — and Bitmain confirms it has done so on testnet. Even Guy Corem’s claim that they only amount to $2,000,000 in savings per year rather than $100,000,000 seems like a reason to expect it to be in use, rather than so little that you wouldn’t bother.

If ASICBoost weren’t in use on mainnet it would probably be relatively straightforward to prove that: Bitmain could publish the benchmark results they got when testing on testnet, explain why that proved not to be worth doing on mainnet, and provide instructions for their customers on how to reproduce their results, for instance. Or Bitmain and others could support efforts to block ASICBoost from being used on mainnet, to ensure no one else uses it, for the greater good of the network — if, as they claim, they’re already not using it, this would come at no cost to them.

To me, much of the rhetoric that’s being passed around seems to be a much better match for what you would expect if ASICBoost were in use, than if it was not. In detail:

The first scenario can be easily verified, and does not match reality. Likewise the third scenario does not (at least in my opinion) match reality; as noted above, many of the explanations presented are superficial at best, contradict each other, or simply fall apart on even a cursory analysis. Unfortunately that rules out assuming good faith — either people are lying about using ASICBoost, or just dissembling about why they’re not using it. Working out which of those is most likely requires coming to our own conclusion on whether ASICBoost makes sense.

I think Jimmy Song had some good posts on that topic. His first, on Bitmain’s ASICBoost claims, finds some plausible examples of ASICBoost testing on testnet; however this was corrected in the comments as having been performed by Timo Hanke, rather than Bitmain. Having a look at other blocks’ version fields on testnet seems to indicate that there hasn’t been much other fiddling of version fields, so presumably whatever testing of ASICBoost was done by Bitmain, fiddling with the version field was not used; but that in turn implies that Bitmain must have been testing covert ASICBoost on testnet, assuming their claim to have tested it on testnet is true in the first place (they could quite reasonably have used a private testnet instead). Two later posts, on profitability and ASICBoost and Bitmain’s profitability in particular, go into more detail, mostly supporting Guy Corem’s analysis mentioned above. Perhaps interestingly, Jimmy Song also made a proposal to the bitcoin-dev list shortly after Greg’s original post revealing ASICBoost and prior to these posts; that proposal would have endorsed use of ASICBoost on mainnet, making it cheaper and compatible with segwit, but would also have made use of ASICBoost readily apparent to both other miners and patent holders.

It seems to me there are three different ways to look at the maths here, and because this is an economics question, each of them gives a different result:

So like I said, that’s three different answers in each of two scenarios: Guy’s low end assumption of 13.2% hashpower and a 15% advantage to ASICBoost gives figures of $29M/$2M/$11M; while Greg’s high end assumptions of 50% hashpower and 30% advantage give figures of $100M/$15M/$47M. The difference in assumptions there is obviously pretty important.

I don’t find the assumptions behind Greg’s maths realistic: in essence, it assumes that mining is so competitive that it is barely profitable even in the short term. However, if that were the case, then nobody would be able to invest in new mining hardware, because they would not recoup their investment. In addition, even if at some point mining were not profitable, increases in the price of bitcoin would change that, and the price of bitcoin has been increasing over recent months. Beyond that, it also assumes electricity prices do not vary between miners — if only the marginal miner is not profitable, it may be that some miners have lower costs and therefore are profitable; and indeed this is likely the case, because electricity prices vary over time due to both seasonal and economic factors. The method Greg uses is useful for establishing an upper limit, however: the only way ASICBoost could offer more savings than Greg’s estimate would be if every block mined produced less revenue than it cost in electricity, and miners were making a loss on every block. (This doesn’t mean $100M is an upper limit however — that estimate was current in April, but the price of bitcoin has more than doubled since then, so the current upper bound via Greg’s maths would be about $236M per year.)

A downside to Guy’s method from the point of view of outside analysis is that it requires more information: you need to know the efficiency of the miners being used and the cost of electricity, and any error in those estimates will be reflected in your final figure. In particular, the cost of electricity needs to be a “whole lifecycle” cost — if it costs 3c/kWh to supply electricity, but you also need to spend an additional 5c/kWh in cooling in order to keep your data-centre operating, then you need to use a figure of 8c/kWh to get useful results. This likely provides a good lower bound estimate however: using ASICBoost will save you energy, and if you forget to account for cooling or some other important factor, then your estimate will be too low; but that will still serve as a loose lower bound. This estimate also changes over time however; while it doesn’t depend on price, it does depend on deployed hashpower — since total hashrate has risen from around 3700 PH/s in April to around 6200 PH/s today, if Bitmain’s hashrate has risen proportionally, it has gone from 500 PH/s to 837 PH/s, and an ASICBoost advantage of 15% means power cost savings have gone from $2M to $3.3M per year; or if Bitmain has instead maintained control of 50% of hashrate at 30% advantage, the savings have gone from $15M to $25M per year.

The key difference between my method and both Greg’s and Guy’s is that they implicitly assume that consuming more electricity is viable, and costs simply increase proportionally; whereas my method assumes that this is not viable, and instead that sufficient mining hardware has been deployed that power consumption is already constrained by some other factor. This might be due to reaching the limit of what the power company can supply, or the rating of the wiring in the data centre, or it might be due to the cooling capacity, or fire risk, or some other factor. For an operation spanning multiple data centres this may be the case for some locations but not others — older data centres may be maxed out, while newer data centres are still being populated and may have excess capacity, for example. If setting up new data centres is not too difficult, it might also be true in the short term, but not true in the longer term — that is, having each miner use more power due to disabling ASICBoost might require shutting some miners down initially, but they may be able to be shifted to other sites over the course of a few weeks or months, and restarted there, though this would require taking into account additional hosting costs beyond electricity and cooling. As such, I think this is a fairly reasonable way to produce a plausible estimate, and it’s the one I’ll be using. Note that it depends on the bitcoin price, so the estimates this method produces have also risen since April, going from $11M to $24M per annum (13.2% hash, 15% advantage) or from $47M to $103M (50% hash, 30% advantage).

The way ASICBoost works is by allowing you to save a few steps: normally when trying to generate a proof of work, you have to do essentially six steps:

  1. A = Expand( Chunk1 )
  2. B = Compress( A, 0 )
  3. C = Expand( Chunk2 )
  4. D = Compress( C, B )
  5. E = Expand( D )
  6. F = Compress( E )

The expected process is to do steps (1,2) once, then do steps (3,4,5,6) about four billion (or more) times, until you get a useful answer. You do this process in parallel across many different chips. ASICBoost changes this process by observing that step (3) is independent of steps (1,2) — so you find a variety of Chunk1s — call them Chunk1-A, Chunk1-B, Chunk1-C and Chunk1-D — that are each compatible with a common Chunk2. In that case, you do steps (1,2) four times, once for each different Chunk1, then do step (3) four billion (or more) times, and do steps (4,5,6) 16 billion (or more) times, to get four times the work while saving 12 billion (or more) iterations of step (3). Depending on the number of Chunk1s you set yourself up to find, and the relative weight of the Expand versus Compress steps, the saving comes to (n-1)/n / 2 / (1+c/e), where n is the number of different Chunk1s you have and c and e are the relative weights of a Compress and an Expand step. If you take the weight of Expand and Compress steps as about equal, it simplifies to 25%*(n-1)/n, and with n=4, this is 18.75%. As such, an ASICBoost advantage of about 20% seems reasonably practical to me. At 50% hash and 20% advantage, my estimates for ASICBoost’s value are $33M in April, and $72M today.
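
Writing that savings fraction out explicitly (with c and e as the per-step weights of Compress and Expand, as above):

\text{savings} = \frac{n-1}{n} \cdot \frac{1}{2(1+c/e)}

\text{with } c = e: \quad \frac{n-1}{4n} = 25\% \cdot \frac{n-1}{n}; \qquad n = 4: \quad \frac{3}{16} = 18.75\%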

So as to the question of whether you’d use ASICBoost, I think the answer is a clear yes: the lower end estimate has risen from $2M to $3.3M per year, and since Bitmain have acknowledged that their AntMiners already support ASICBoost in hardware, the only additional cost is finding collisions, which may not be completely trivial, but is not difficult and is easily automated.

If the benefit is only in this range, however, this does not provide a plausible explanation for opposing segwit: having the Bitcoin industry come to a consensus about how to move forward would likely increase the bitcoin price substantially, definitely increasing Bitmain’s mining revenue — even a 2% increase in price would cover their additional costs. However, as above, I believe this is only a lower bound, and a more reasonable estimate is on the order of $11M-$47M as of April or $24M-$103M as of today. This is a much more serious range, and would require an 11%-25% increase in price to not be an outright loss; and a far more attractive proposition would be to find a compromise position that both allows the industry to move forward (increasing the price) and allows ASICBoost to remain operational (maintaining the cost savings / revenue boost).

 

It’s possible to take a different approach to analysing the cost-effectiveness of mining given how much you need to pay in electricity costs. If you have access to a lot of power at a flat rate, can deal with other hosting issues, can expand (or reduce) your mining infrastructure substantially, and have some degree of influence in how much hashpower other miners can deploy, then you can derive a formula for what proportion of hashpower is most profitable for you to control.

In particular, if your costs are determined by an electricity (and cooling, etc) price, E, in dollars per kWh and performance, r, in Joules per gigahash, then given your hashrate, h in terahash/second, your power usage in watts is (h*1e3*r), and you run this for 600 seconds on average between each block (h*r*6e5 Ws), which you divide by 3.6M to convert to kWh (h*r/6), then multiply by your electricity cost to get a dollar figure (h*r*E/6). Your revenue depends on the hashrate of everyone else, which we’ll call g, and on average you receive (p*R*h/(h+g)) every 600 seconds, where p is the price of Bitcoin in dollars and R is the reward (subsidy and fees) you receive from a block. Your profit is just the difference, namely h*(p*R/(h+g) – r*E/6). Assuming you’re able to manufacture and deploy hashrate relatively easily, at least in comparison to everyone else, you can optimise your profit by varying h while the other variables (bitcoin price p, block reward R, miner performance r, electricity cost E, and external hashpower g) remain constant (ie, set the derivative of that formula with respect to h to zero and simplify), which gives a result of 6gpR/Er = (g+h)^2.
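
Spelling that optimisation step out:

P(h) = h\left(\frac{pR}{h+g} - \frac{rE}{6}\right), \qquad \frac{dP}{dh} = \frac{pRg}{(h+g)^2} - \frac{rE}{6} = 0 \;\Rightarrow\; (g+h)^2 = \frac{6gpR}{Er}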

This is solvable for h (square root both sides and subtract g), but if we assume Bitmain is clever and well funded enough to have already essentially optimised their profits, we can get a better sense of what this means. Since g+h is just the total bitcoin hashrate, if we call that t and divide both sides by it, we get 6gpR/Ert = t, or g/t = (Ert)/(6pR), which tells us what proportion of hashrate the rest of the network can have (g/t) if Bitmain has optimised its profits; or, alternatively, we can work out h/t = 1-g/t = 1-(Ert)/(6pR), which tells us what proportion of hashrate Bitmain will have if it has optimised its profits. Plugging in E=$0.03 per kWh, r=0.1 J/GH, t=6e6 TH/s, p=$2400/BTC, R=12.5 BTC gives a figure of 0.9 – so given the current state of the network, and Guy Corem’s cost estimate, Bitmain would optimise its day to day profits by controlling 90% of mining hashrate. I’m not convinced $0.03 is an entirely reasonable figure, though — my inclination is to suspect something like $0.08 per kWh is more reasonable; but even so, that only reduces Bitmain’s optimal control to around 73%.
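
For reference, those figures come straight from the formula:

\frac{h}{t} = 1 - \frac{Ert}{6pR} = 1 - \frac{0.03 \times 0.1 \times 6\times10^{6}}{6 \times 2400 \times 12.5} = 1 - \frac{18\,000}{180\,000} = 0.9

\text{and at } E = \$0.08\text{/kWh}: \quad 1 - \frac{48\,000}{180\,000} \approx 0.73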

Because of that incentive structure, if Bitmain’s current hashrate is lower than that amount, then lowering manufacturing costs for own-use miners by 15% (per Sam Cole’s estimates) and lowering ongoing costs by 15%-30% by using ASICBoost could have a compounding effect by making it easier to expand quickly. (It’s not clear to me that manufacturing a line of ASICBoost-only miners to reduce manufacturing costs by 15% necessarily makes sense. For one thing, it would come at the cost of not being able to mine with them while they are state of the art and then sell them on to customers once a more efficient model has been developed, which seems like it might be a good way to manage inventory. For another, it vastly increases the impact of ASICBoost not being available: rather than simply increasing electricity costs by 15%-30%, it would mean reducing output to 10%-25% of what it was, likely rendering the hardware immediately obsolete.)

Using the same formula, it’s possible to work out a ratio of bitcoin price (p) to hashrate (t) that makes it suboptimal for a manufacturer to control a hashrate majority (at least just due to normal mining income): h/t < 0.5, so 1-Ert/6pR < 0.5, and thus t > 3pR/Er. Plugging in p=2400, R=12.5, E=0.08, r=0.1, this gives a total hash rate of 11.25M TH/s, almost double the current hash rate. This hashrate target would obviously increase as the bitcoin price increases, halve if the block reward halves (if a fall in the inflation subsidy is not compensated by a corresponding increase in fee income, eg), increase if the efficiency of mining hardware increases, and decrease if the cost of electricity increases. For a simpler formula, assuming the best hosting price is $0.08 per kWh, and while the Antminer S9’s efficiency at 0.1 J/GH is state of the art, and the block reward is 12.5 BTC, the global hashrate in TH/s should be at least around 5000 times the price (ie 3R/Er = 4687.5, near enough to 5000).
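
The arithmetic behind those two figures:

t > \frac{3pR}{Er} = \frac{3 \times 2400 \times 12.5}{0.08 \times 0.1} = 11.25 \times 10^{6}\ \text{TH/s}, \qquad \frac{3R}{Er} = \frac{3 \times 12.5}{0.008} = 4687.5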

Note that this target also sets a limit on the range at which mining can be profitable: if it’s just barely better to allow other people to control >50% of miners when your cost of electricity is E, then for someone else whose cost of electricity is 2*E or more, optimal profit is when other people control 100% of hashrate, that is, you don’t mine at all. Thus if the best large scale hosting globally costs $0.08/kWh, then either mining is not profitable anywhere that hosting costs $0.16/kWh or more, or there’s strong centralisation pressure for a mining hardware manufacturer with access to the cheapest electricity to control more than 50% of hashrate. Likewise, if Bitmain really can do hosting at $0.03/kWh, then either they’re incentivised to try to control over 50% of hashpower, or mining is unprofitable at $0.06/kWh and above.

If Bitmain (or any mining ASIC manufacturer) is supplying the majority of new hashrate, they actually have a fairly straightforward way of achieving that goal: if they dedicate 50-70% of each batch of ASICs built for their own use, and sell the rest, with the retail price of the sold miners sufficient to cover the manufacturing cost of the entire batch, then cashflow will mostly take care of itself. At $1200 retail price and $500 manufacturing costs (per Jimmy Song’s numbers), that strategy would imply targeting control of up to about 58% of total hashpower. The above formula would imply that’s the profit-maximising target at the current total hashrate and price if your average hosting cost is about $0.13 per kWh. (Those figures obviously rely heavily on the accuracy of the estimated manufacturing costs of mining hardware; at $400 per unit and $1200 retail, that would be 67% of hashpower, and about $0.09 per kWh)
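
That 58% figure follows from the batch economics: if a fraction k of each batch is kept for own use and the remainder sold, the sales revenue has to cover the manufacturing cost of the whole batch:

1200\,(1-k) \ge 500 \;\Rightarrow\; k \le 1 - \frac{500}{1200} \approx 58\%; \qquad \text{at } \$400: \; k \le 1 - \frac{400}{1200} \approx 67\%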

Strategies like the above are also why this analysis doesn’t apply to miners who buy their hardware from a vendor rather than building their own: because every time they increase their own hash rate (h), the external hashrate (g) also increases as a direct result, it is not valid to assume that g is constant when optimising h, so the partial derivative and optimisation are in turn invalid, and the final result is not applicable.

 

Bitmain’s mining pool, AntPool, obviously doesn’t directly account for 58% or more of total hashpower; though currently they’re the pool with the most hashpower, at about 20%. As I understand it, Bitmain is also known to control at least BTC.com and ConnectBTC, which add another 7.6%. The other “Emergent Consensus” supporting pools (Bitcoin.com, BTC.top, ViaBTC) account for about 22% of hashpower, however, which brings the total to just under 50%, roughly the right ballpark — and an additional 8% or 9% could easily be pointed at other public pools like slush or f2pool. Whether the “emergent consensus” pools are aligned due to common ownership and contractual obligations or simply similar interests is debatable, though. ViaBTC is funded by Bitmain, and Canoe was built and sold by Bitmain, which means strong contractual ties might exist; however Jihan Wu, Bitmain’s co-founder, has disclaimed equity ties to BTC.top. Bitcoin.com is owned by Roger Ver, but I haven’t come across anything implying a business relationship between Bitmain and Bitcoin.com beyond supplier and customer. However John McAfee’s apparently forthcoming MGT mining pool is both partnered with Bitmain and advised by Roger Ver, so the existence of tighter ties may be plausible.

It seems likely to me that Bitmain is actually behaving more altruistically than is economically rational according to the analysis above: while it seems likely to me that Bitcoin.com, BTC.top, ViaBTC and Canoe have strong ties to Bitmain and that Bitmain likely has a high level of influence — whether due to contracts, business relationships or simply loyalty and friendship — this nevertheless implies less control over the hashpower than direct ownership and management, and likely less profit. This could be due to a number of factors: perhaps Bitmain really is already sufficiently profitable from mining that they’re focusing on building their business in other ways; perhaps they feel the risks of centralised mining power are too high (and would ultimately be a risk to their long term profits) and are doing their best to ensure that mining power is decentralised while still trying to maximise the return to their investors; perhaps the rate of expansion implied by this analysis requires more investment than they can cover from their cashflow, and additional hashpower is funded by new investors who are simply assigned ownership of a new mining pool, which may help Bitmain’s investors assure themselves they aren’t being duped by a pyramid scheme and gives more of an appearance of decentralisation.

It seems to me therefore there could be a variety of ways in which Bitmain may have influence over a majority of hashpower:

 

So, conclusions:

July 13, 2017 09:16 AM

July 12, 2017

Adrian Sutton: Moolah Diaries – Maintaining Invariants with Vuex Mutations

Previously on the Moolah Diaries I had plans to recalculate the current balance at each transaction as part of any Vuex action that changed a transaction. Tracking balances is a good example of an invariant – at the completion of any atomic change, the balance for a transaction should be the initial balance plus the sum of all transaction amounts prior to the current transaction.

The other invariant around transactions is that they should be sorted in reverse chronological order. To make the order completely deterministic, transactions with the same dates are then sorted by ID. This isn’t strictly necessary, but it avoids having transactions jump around unexpectedly.

My original plan was to ensure both of these invariants were preserved as part of each Vuex action, but actions aren’t the atomic operations in Vuex – mutations are. So we should update balances and adjust sort order as part of any mutation that affects transactions. Since only mutations can change state in Vuex, we can be sure that if our mutations preserve the invariants then they will always hold true. With actions, there’s always the risk that something would use mutations directly and break the invariants.

So lesson number 1 – in Vuex, it should be mutations that are responsible for maintaining invariants. That’s probably not news to anyone who’s used Vuex for long.
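
For illustration, such a mutation might look something like this (a minimal sketch only; the mutation name, the state shape, and the assumption that each balance includes its own transaction’s amount are mine, not Moolah’s actual code):

// Hypothetical sketch: names and state shape are assumptions for illustration.
export const mutations = {
    addTransaction(state, transaction) {
        state.transactions.push(transaction);
        // Invariant 1: reverse chronological order, ties broken by id.
        state.transactions.sort((a, b) =>
            b.date.localeCompare(a.date) || b.id - a.id);
        // Invariant 2: each balance is the previous balance plus this transaction's amount,
        // recalculated from the oldest transaction (end of the list) to the newest.
        let balance = state.initialBalance;
        for (let i = state.transactions.length - 1; i >= 0; i--) {
            balance += state.transactions[i].amount;
            state.transactions[i].balance = balance;
        }
    },
};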

We could take that a step further and verify that the invariants always hold true using a Vuex plugin:

const invariantPlugin = store => {
    if (process.env.NODE_ENV !== 'production') {
        store.subscribe((mutation, state) => {
            // Verify invariants
        });
    }
};

However, the way Vuex recommends testing actions and mutations means registered plugins don’t get loaded. Tests that cover views that use the store will most likely stub out the actions, since they’ll usually make HTTP requests. So mostly the invariant check will only run during manual testing. That may still be worth it, but I’m not entirely sure yet…

 

July 12, 2017 06:52 AM

July 10, 2017

Tim Kent: Down the OVA compatibility rabbit hole

I recently volunteered to create a B2R CTF for SecTalks_BNE. It was fairly simple to create the content within the machine; however, I came across a few hurdles when trying to make the machine as portable as possible. I wanted it to be easily usable on VirtualBox as well as VMware Fusion, Player and Workstation.

Before embarking on this project I had foolishly assumed I could just create the VM in VirtualBox and then "Export Appliance..." to create a portable OVA. If only it were that simple!

The OVA files created by VirtualBox worked fine for other VirtualBox users, but VMware users had varying levels of success; Fusion wouldn't play nice at all.

I've created this post so that I remember what to do again down the track, and as a side bonus hopefully someone else will benefit or learn from it!

Let me explain some acronyms first

An OVA file is an Open Virtualisation Appliance. It's essentially a tarball containing an OVF, one or more disk images (usually VMDK files) and a manifest (checksum) file.

The OVF (Open Virtualisation Format) specifies the configuration of the virtual machine. The disk images contain data held by the virtual drives.

Gathering test data

To get some VMware test data I dragged my old HP N54L out of the cupboard and installed ESXi 6.5 on it. The disk performance was horrendously slow until I disabled the problematic AHCI driver as per this blog.

After creating a few OVA files from ESXi, my testing concluded that VirtualBox happily accepted a VMware OVA but VMware had a hard time working with a VirtualBox OVA.

One solution would be to do all my development on ESXi, but I quite like using VirtualBox on my laptop!

My VirtualBox solution

I decided to keep things simple and use ESXi to generate the initial OVA. I chose to target VMware 4 to keep it compatible with pretty much everything. After this step ESXi was no longer required.

I then unpacked said OVA, prepared the replacement disk image with VirtualBox and rolled my own OVA using a few commands.

The initial OVA contained the following:
$ tar xvf covfefe.ova
covfefe.ovf
covfefe.mf
disk-0.vmdk

To prepare the replacement disk-0.vmdk file, I ran through the steps in my earlier blog post and converted from VDI to VMDK with clonemedium (also mentioned in the same post).

After replacing the VMDK file, I edited the size entry in the OVF to reflect the new file:
<File ovf:href="disk-0.vmdk" ovf:id="file1" ovf:size="464093696"/>

Once I finished editing the OVF I had to create the correct checksums to use in the manifest file:
$ shasum covfefe.ovf disk-0.vmdk
249eef04df64f45a185e809e18fb285cadfcd6f0  covfefe.ovf
ae1718beb7d5eb7dfb5158718b0eceda812512a2  disk-0.vmdk

After the changes my manifest file looked like this:
$ cat covfefe.mf 
SHA1 (covfefe.ovf)= 249eef04df64f45a185e809e18fb285cadfcd6f0
SHA1 (disk-0.vmdk)= ae1718beb7d5eb7dfb5158718b0eceda812512a2

I then reassembled the OVA file:
$ tar cf covfefe.ova covfefe.ovf covfefe.mf disk-0.vmdk

Just as a test I also did the assembly using OVF Tool as it did some extra checks while assembling:
$ /Applications/VMware\ OVF\ Tool/ovftool covfefe.ovf covfefe.ova

The OVA has worked flawlessly on everything I've tested it on so far which is VirtualBox 5.1.22, VMware ESXi 6.5, Fusion 8.5.8 and Player 6.0.1.

July 10, 2017 12:42 PM

Anthony Towns: Bitcoin: ASICBoost and segwit2x – Background

I’ve been trying to make heads or tails of what the heck is going on in Bitcoin for a while now. I’m not sure I’ve actually made that much progress, but I’ve at least got some thoughts that seem coherent now.

First, this post is background for people playing along at home who aren’t familiar with the issues or jargon: Bitcoin is a currency based on an electronic ledger that essentially tracks how much Bitcoin exists, and how someone can be authorised to transfer it to someone else; that ledger is currently about 100GB in size, growing at a rate of about a gigabyte a week. The ledger is updated by miners, who compete by doing otherwise pointless work running cryptographic hashes (and in so doing obtain a “proof of work”), and in return receive a reward (denominated in bitcoin) made up from fees by people transacting and an inflation subsidy. Different miners are competing in an essentially zero-sum game, because fees and inflation are essentially a fixed amount that is (roughly) divided up amongst miners according to how much work they do — so while you get more reward for doing more work, it comes at a cost of other miners receiving less reward.

Because the ledger only grows by (about) a gigabyte each week (or a megabyte per block, which is roughly every ten minutes), there is a limit on how many transactions can be included each week (ie, supply is limited), which both increases fees and limits adoption — so for quite a while now, people in the bitcoin ecosystem with a focus on growth have wanted to work out ways to increase the transaction rate. Initial proposals in mid 2015 suggested allowing miners to regularly determine the limit with no official upper bound (nominally “BIP100”, though never actually formally submitted as a proposal), or to increase by a factor of eight within six months, then double every two years after that, until reaching almost 200 times the current size by 2036 (BIP101), or to increase at a rate of about 17% per annum (suggested on the mailing list as BIP103, but never formally proposed). These proposals had two major risks: locking in a lot of growth that may turn out to be unnecessary or actively harmful, and requiring what is called a “hard fork”, which would render the existing bitcoin software unable to track the ledger after the change took effect, with the possible outcome that two ledgers would coexist and would in turn cause a range of problems. To reduce the former risk, a minimal compromise proposal was made to “kick the can down the road” and just double the ledger growth rate, then figure out a more permanent solution down the road (BIP102) (or to double it three times — to 2MB, 4MB then 8MB — over four years, per Adam Back). A few months later, some of the devs figured out a way to more or less achieve this that also doesn’t require a hard fork, and comes with a host of other benefits, and proposed an update called “segregated witness” at the December 2015 Scaling Bitcoin conference.

And shortly after that things went completely off the rails, and have stayed that way since. Ultimately there seem to be two camps: one group is happy to deploy segregated witness, and is eager to make further improvements to Bitcoin based on that (this is my take on events); while the other group does not, perhaps due to some combination of being opposed to the segregated witness changes directly, wanting a more direct upgrade immediately, being afraid deploying segregated witness will block other changes, or wanting to take control of the bitcoin codebase/roadmap from the current developers (take this with a grain of salt: these aren’t opinions I share or even find particularly reasonable, so I can’t do them justice when describing them; cf ViaBTC’s post to get that side of the argument made directly, eg)

Most recently, and presumably on the basis that the opposed group are mostly worried that deploying segregated witness will prevent or significantly delay a more direct increase in capacity, a bitcoin venture capitalist, Barry Silbert, organised an agreement amongst a number of companies including many miners, to both activate segregated witness within the next month, and to do a hard fork capacity increase by the end of the year. This is the “segwit2x” project; named because it takes segregated witness, (“segwit”) and then additionally doubles its capacity increase (“2x”). This agreement is not supported by any of the existing dev team, and is being developed by Jeff Garzik (who was behind BIP100 and BIP102 mentioned above) in a forked codebase renamed “btc1“, so if successful, this may also satisfy members of the opposed group motivated by a desire to take control of the bitcoin codebase and roadmap, despite that not being an explicit part of the agreement itself.

To me, the arguments presented for opposing segwit don’t really seem plausible. As far as future development goes, a roadmap was put out in December 2015 and endorsed by many developers that explicitly included a hard fork for increased capacity (“moderate block size increase proposals (such as 2/4/8 …)”), among many other things, so the risk of no further progress happening seems contrary to the facts to me. The core bitcoin devs are extremely capable in my estimation, so replacing them seems a bad idea from the start, but even more than that, they take a notably hands off approach to dictating where Bitcoin will go in future — so, to my mind, it seems like a more sensible thing to try would be working with them to advance the bitcoin ecosystem in whatever direction you want, rather than to try to replace them outright. In that context, it seems particularly notable to me that in the eighteen months between the segregated witness proposal and the segwit2x agreement, there hasn’t been any serious attempt to propose a hard fork capacity increase that meets the core devs’ quality standards; for instance there has never been any code for BIP100, and of the various hard forking codebases that have arisen by advocates of the hard fork approach — Bitcoin XT, Bitcoin Classic, Bitcoin Unlimited, btc1, and Bitcoin ABC — none have been developed in a way that’s suitable for the changes to be reviewed and merged into core via a pull request in the normal fashion. Further, since one of the main criticisms of a hard fork is that deployment costs are higher when it is done in a short time frame (weeks or a few months versus a year or longer), that lack of engagement over the past 18 months followed by a desperate rush now seems particularly poor to me.

A different explanation for the opposition to segwit became public in April, however. ASICBoost is a patent-pending optimisation to the way Bitcoin miners do the work that entitles them to extend the ledger (for which they receive the rewards described earlier), and while there are a few ways of making use of ASICBoost, perhaps the most effective way turns out to be incompatible with segwit. There are three main alternatives to the covert, segwit-incompatible approach, all of which have serious downsides. The first, overt ASICBoost via modifying the block version reveals that you’re using ASICBoost, which would either (a) encourage other miners to also use the optimisation reducing your profits, (b) give the patent holder cause to charge you royalties or cause other problems (assuming the patent is eventually granted and deemed valid), or (c) encourage the bitcoin community at large to change the ecosystem rules so that the optimisation no longer works. The second, mining empty blocks via ASICBoost means you don’t gain any fee income, reducing your revenue and hence profit. And the third, rolling the extranonce to find a collision rather than combining partial transaction trees increases the preparation work by a factor of ten or so, which is probably enough to outweigh the savings from the optimisation in the first place.

If ASICBoost were being used by a significant number of miners, and segregated witness prevents its continued use in practice, then we suddenly have a very plausible explanation for much of the apparent madness: the loss of the optimisation could significantly increase some miners’ costs or reduce their revenue, reducing profit either way (a high end estimate of $100,000,000 per year was given in the original explanation), which would justify significant investment in blocking that change. Further, an honest explanation of the problem would not be feasible, because this would be just as bad as doing the optimisation overtly — it would increase competition, alert the potential patent owners, and might cause the optimisation to be deliberately disabled — all of which would also negatively affect profits. As a result, there would be substantial opposition to segwit, but the reasons presented in public for this opposition would be false, and it would not be surprising if the people presenting these reasons only put half-hearted effort into providing evidence — their purpose is simply to prevent or at least delay segwit, rather than to actually inform or build a new consensus. To this way of thinking, the emphasis on lack of communication from core devs or on the desire for a hard fork block size increase isn’t the actual goal, so the lack of effort put into resolving them over the past 18 months by the people complaining about them is no longer surprising.

With that background, I think there are two important questions remaining:

  1. Is it plausible that preventing ASICBoost would actually cost people millions in profit, or is that just an intriguing hypothetical that doesn’t turn out to have much to do with reality?
  2. If preserving ASICBoost is a plausible motivation, what will happen with segwit2x, given that by enabling segregated witness, it does nothing to preserve ASICBoost?

Well, stay tuned…

July 10, 2017 03:20 AM

Tim Kent: Prepping a Linux VM for OVA export

These are the steps I recommend to prepare a Linux VM for OVA export. It should keep the size down to a minimum and prevent headaches and confusion down the track!

I'm using VirtualBox but the info applies to VMware. You'll just have to read the VMware documentation for the compacting section.

I am running these commands from a Debian Stretch live CD inside the guest, and have mounted the destination filesystem (/dev/sda1) as /mnt:
$ sudo mount /dev/sda1 /mnt

Disable systemd from renaming network interfaces

If you leave this enabled, you'll have different network interface names for VirtualBox and VMware so your interface definitions won't work in both!

I disable this by adding the kernel parameter "net.ifnames=0"; you can do this within /mnt/etc/default/grub:
GRUB_CMDLINE_LINUX="net.ifnames=0"

Then run update-grub from within a chroot:
$ sudo mount --bind /dev /mnt/dev
$ sudo mount --bind /proc /mnt/proc
$ sudo mount --bind /sys /mnt/sys
$ sudo chroot /mnt
# update-grub
# exit
$ sudo umount /mnt/dev /mnt/proc /mnt/sys

You'll now want to adjust /etc/network/interfaces (or equivalent) accordingly to reflect eth0 instead of enp0s17 or whatever.

Sanitise the log directory

Nuke the contents but leave files in place:
$ sudo find /mnt/var/log -type f -exec sh -c 'cat /dev/null > {}' \;

Discard unallocated blocks

Unmount the filesystem then discard unallocated blocks:
$ sudo umount /mnt
$ sudo e2fsck -E discard /dev/sda1

Compact the disk image

This is done from the host, not the guest.

If you're using a VDI file, you can use modifymedium --compact:
https://www.virtualbox.org/manual/ch08.html#vboxmanage-modifyvdi

If you're using a VMDK file, you can use clonemedium:
https://www.virtualbox.org/manual/ch08.html#vboxmanage-clonevdi

July 10, 2017 03:07 AM

July 06, 2017

Adrian Sutton: Moolah Diaries – Multi-tenant Support for Testing

Experience at LMAX has very clearly demonstrated the benefits of good test isolation, so one of the first things I added to Moolah was multi-tenant support. Each account is a completely isolated little world which is perfect for testing.

Given that I’ve completely punted on storing passwords by delegating authentication to Google (and potentially other places in the future, since it’s using bell to handle third party authentication), there’s actually no user creation step at all, which makes it even easier. As a result it’s not just the acceptance tests that benefit from the isolation, but also the database tests, which no longer need to delete everything in the database to give themselves a clear workspace.

It’s all quite civilised really.

July 06, 2017 09:52 PM

Adrian Sutton: Moolah Diaries – Vuex for Front End State

Most of the work I’ve done on Moolah so far has been on the server side – primarily fairly boring setup work and working out how best to use a number of libraries that were new to me. The most interesting work, however, has been on the front end. I’ve been using Vue.js and Vuetify for the UI after Vue’s success in the day job. The Moolah UI has much more data interdependence between components than we’ve needed at work, though, so I’ve introduced Vuex to manage the state in a centralised and more managed way.

I really quite like the flow – Vue components still own any state related directly to the UI, like whether a dialog is shown or hidden, but all the business model is stored in and managed by the Vuex store. The Vue components dispatch actions, which perform computation, make requests to the backend or do whatever else is required, then commit any changes to the store (via mutations). The usual Vue data binding then kicks in to update the UI to reflect those changes.

The big advantage of this is that it naturally pulls business logic out of .vue files, preventing them getting too big. Without Vuex, that basically depends on having the discipline to notice when a .vue file is doing too much and then untangling and splitting out the business logic. Vuex provides a much clearer and more consistent way to delineate business logic from view code, because you can’t modify state directly from the Vue component and it then becomes natural to split out an action.
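
A small illustration of that split (the module, action and endpoint names here are generic placeholders, not Moolah’s actual store):

// Hypothetical sketch: the component dispatches an action, the action talks to the backend
// and commits a mutation, and data binding updates the template from the new state.
const transactionsModule = {
    namespaced: true,
    state: () => ({transactions: []}),
    mutations: {
        setTransactions(state, transactions) {
            state.transactions = transactions;
        },
    },
    actions: {
        async loadTransactions({commit}, accountId) {
            const response = await fetch(`/api/accounts/${accountId}/transactions`);
            commit('setTransactions', await response.json());
        },
    },
};

// In a .vue component, only UI state stays local; business state comes from the store:
// methods: { refresh() { this.$store.dispatch('transactions/loadTransactions', this.accountId); } }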

Vuex’s module support also makes it easy to keep your Vuex store from becoming the big ball of mud that does everything.

However, I’m still searching for a good, efficient way to calculate and update the current balance for each transaction. The actual calculation is simple enough – the balance for any transaction is the sum of the amounts of every transaction up to and including it in the account. Simplistically we could just start from the first transaction and iterate through, calculating all the balances in a single O(n) pass. However, recalculating the balance for every transaction on each change is incredibly wasteful and is a big part of why the original version of Moolah takes so long to get started – it’s calculating all those balances. Each transaction balance actually only depends on two things: the transaction amount and the balance of the previous transaction. Since most new or changed transactions are at or near the very end of the transaction list, we should be able to avoid recalculating most of the balances.

I don’t think Vue/Vuex’s lazy evaluation will be able to avoid doing a lot of extra recalculation, not least of all because the only way to represent this would be a transactionsWithBalances computed view and it would output the entire list of transactions so would recalculate every balance on every change.

However, it’s reasonably straightforward to build the lazy evaluation manually, but where does that sit in the Vuex system? I’m guessing the pre-calculated balance is just a property of every transaction in the state, and actions take responsibility for updating any balances they might have affected.

I’m leaning towards having a dedicated ‘updateBalances’ action that can be triggered at the end of any action that changes transactions and is given the first transaction that requires its balance recalculated. Since every transaction after that depends on the balance of the one before, they’ll also need updating.
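
Something along these lines is what I have in mind (a sketch only; the action and mutation names, the index-based payload and the state shape are all assumptions):

// Hypothetical sketch of an incremental 'updateBalances' action. It takes the index of the
// first transaction whose balance is stale and only walks from there towards the newest
// transaction (index 0, since the list is kept in reverse chronological order).
export const actions = {
    updateBalances({state, commit}, firstDirtyIndex) {
        const transactions = state.transactions;
        // The next older transaction (higher index) still has a trusted balance.
        let balance = firstDirtyIndex + 1 < transactions.length
            ? transactions[firstDirtyIndex + 1].balance
            : state.initialBalance;
        for (let i = firstDirtyIndex; i >= 0; i--) {
            balance += transactions[i].amount;
            commit('setBalance', {index: i, balance});
        }
    },
};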

I think that approach works, and I’m now reminded of how useful it is to write a diary like this as a way of thinking through these sorts of issues.

July 06, 2017 08:38 PM

July 04, 2017

Adrian SuttonModernising Our JavaScript – Vue.js To The Rescue

I’ve previously written about how Angular 2 didn’t work out for us; our second attempt was to use Vue.js, which has been far more successful. The biggest difference with Angular is that Vue is a much smaller library. It has far less functionality included in the box but a wide range of plugins to flesh out functionality as needed. That avoided the two big issues we had with Angular 2: the slow build times and the awkwardness of dropping components into an existing system.

We’ve been so happy with how Vue.js fits in for us, we’re now in the process of replacing the things we had built in Angular 2 with Vue.js versions.

We set out looking for a more modern UI framework primarily because we wanted the data binding functionality they provide. As expected, that’s been a very big benefit for any parts of the UI that are even slightly more than simple CRUD. We were using mustache for our templates and the extra power and flexibility of Vue’s templating has been a big advantage. There is a risk of making the templates too complex and hard to understand, but that’s mitigated by how easy it is to break out separate components that are narrowly focused.

In fact, the component model has turned out to be the single biggest advantage of Vue over jQuery. We did have some patterns built up around jQuery that enabled component-like behaviour but they were very primitive compared to what Vue provides. We’ve got a growing library of reusable components already that all fit in nicely with the existing look and feel of the UI.

The benefit of components is so great that I’d use Vue even for very straightforward UIs that jQuery by itself could handle easily. Vue adds almost no overhead in terms of complexity and makes the delineation of responsibilities between components very clear, which often leads to unexpected re-usability. With mixins it’s also possible to reuse cross-cutting concerns easily.
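For example, a small cross-cutting mixin might look something like this sketch (the names here are illustrative, not from our codebase):

export const loadingMixin = {
    data() {
        return {loading: false};
    },
    methods: {
        // Wrap any async work so the component automatically tracks a loading flag.
        async withLoading(work) {
            this.loading = true;
            try {
                return await work();
            } finally {
                this.loading = false;
            }
        }
    }
};

Any component that lists it in mixins: [loadingMixin] picks up the loading flag and the withLoading helper without duplicating that boilerplate.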

All those components wind up being built in .vue files which combine HTML, JavaScript and styles for the component into one file. I was quite sceptical of this at first but Vue provides a good justification for the decision and in practice it works really well as long as you are a bit disciplined at splitting things out into separate files if they become at all complex. Typically I try to have the code in the .vue file entirely focused on managing the component state and split out the details of interacting with anything external (e.g. calling server APIs and parsing responses) into helper files.
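A typical helper file then ends up being nothing more than a thin, testable wrapper around the server API, something along these lines (the URL and response shape are hypothetical):

// accountsApi.js - the .vue component imports these functions and never touches fetch itself.
export async function fetchAccounts() {
    const response = await fetch('/api/accounts');
    if (!response.ok) {
        throw new Error('Failed to load accounts: ' + response.status);
    }
    return response.json();
}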

Ultimately, it’s the component system that is really bringing us the most value which is a bit of a surprise given we had expected data-binding to be the real powerhouse. And data-binding is great, but it’s got nothing on the advantages of a clear component system that’s at just the right level of opinionated-ness for our system. We’re not only building UIs faster, but the UIs we build are better because any time we spend polishing a component applies everywhere it’s used.

I’m really struggling to think of a case where I wouldn’t use Vue now, and if I found one it would likely only be because one of the other similar options (e.g. Angular 2) was a better fit for that case.

July 04, 2017 03:44 AM

July 02, 2017

Adrian SuttonMoolah Diaries – Principles

It’s always important to have some basic principles in mind before you start writing code for a project. That way you have something to guide your technology choices, development practices and general approach to the code. They’re not immovable – many of them won’t survive the first encounter with actual code – but identifying your principles helps to make a code base much more coherent.

Naturally for the Moolah rebuild I just jumped in and started writing code without much thought, let alone guiding principles but a few have emerged either as lessons from the first version, personal philosophy or early experiences.

Make the Server Smart

In the first version of Moolah, the server was just a dumb conduit to the database – in fact for quite some time it didn’t even parse the JSON objects, just stuffed them into the database and pulled them back out again. That missed a lot of opportunities for optimisation and keeping the client light and fast.

Ultimately this isn’t so much about client vs server, it’s more about backend vs frontend. In a browser the data storage options are limited (though much better now than 5 years ago) but the server has a proper database so can filter and work with data much faster. Sure I could re-implement fast indexing in the browser but why bother?
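As a sketch of the idea (Express and the SQL here are stand-ins for illustration, not necessarily Moolah’s actual stack), the server can accept a date range and let the database do the filtering instead of shipping every transaction to the browser:

const express = require('express');
const db = require('./db'); // hypothetical database client wrapper
const app = express();

app.get('/api/accounts/:accountId/transactions', async (req, res) => {
    const {from, to} = req.query; // the client asks only for the range it needs
    const transactions = await db.query(
        'SELECT * FROM transactions WHERE account_id = $1 AND date BETWEEN $2 AND $3 ORDER BY date',
        [req.params.accountId, from, to]);
    res.json(transactions);
});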

Provide Well-Defined, Client-Agnostic APIs

I want to be able to work on the client and server independently, potentially replacing one outright without changing the other. That way if one of my shiny tool choices doesn’t pan out well it’s a lot easier to ditch it, even if that means completely rebuilding the client or server. It also means I can play around with native applications should I want to in the future.

Deploy Early, Deploy Often and Deploy Automatically

This is basically the only sane way to work, but I’m often too lazy to do it properly with side projects. Moolah still has a few missing elements but has come far enough that I’m likely to stick with this and do it properly.

Test at Multiple Levels

Side projects that involve playing with shiny new toys often wind up lacking any form of tests and that comes back to hurt pretty quickly. I’ve made an effort to have a reasonable level of tests in place right from the start. There’s been a big cost to that as I wrestle with the various testing frameworks and understand the best ways to approach testing Moolah, but that cost will be paid back over time and is much lower than trying to wrestle with those tools and a big, test-unfriendly code base.

I’m also fairly inclined to only believe something is actually working when I have quite high level tests for it, including going right through to the real database. Unit tests are a design tool, integration tests are where you flush out most of the bugs and I’ve been seeing that even this early.

Fail Fast

If a tool, library or approach isn’t going to work I want to find out quickly and ditch it quickly. I’m trying lots of new things so have had to be pretty ruthless at ripping them back out, not just if they don’t work, but if they introduce too many gotchas or hoops to jump through.

Have Fun

There’s a degree to which I need the Moolah rebuild to reach feature parity with the old system fairly quickly so I can switch over, but this is a side project so it’s mostly about having fun. I’ve let myself get distracted building a fancy pre-login landing page even though it’s way more than I’ll likely ever need.

I’m not sure this is a particularly meaningful list in general but for this particular project at this particular time they seem to be serving me well.

July 02, 2017 04:20 AM

June 19, 2017

Adrian SuttonModernising Our JavaScript – Why Angular 2 Didn’t Work

At LMAX we value simplicity very highly, and since most of the UI we need for the exchange is fairly straightforward settings and reports we’ve historically kept away from JavaScript frameworks. Instead we’ve stuck with jQuery and bootstrap along with some very simple common utilities and patterns we’ve built ourselves. Mostly this has worked very well for us.

Sometimes though we have more complex UIs where things dynamically change or state is more complex. In those cases things start to break down and get very messy. The simplicity of our libraries winds up causing complexity in our code instead of avoiding it. We needed something better.

Some side projects had used Angular and a few people were familiar with it so we started out trialling Angular 2.0. While it was much better for those complex cases, the framework itself introduced so much complexity and cost that it was unpleasant to work with. Predominantly we had two main issues:

  1. Build times were too slow
  2. It wasn’t well suited for dropping an Angular 2 component into an existing system rather than having everything live in Angular 2 world

Build Times

This was the most surprising problem – Angular 2 build times were painfully slow. We found we could build all of the java parts of the exchange before npm could even finish installing the dependencies for an Angular 2 project – even with all the dependencies in a local cache and using npm 5’s --offline option. We use buck for our build system and it does an excellent job of only building what needs to be changed and caching results so most of the time we could avoid the long npm install step, but it still needed to run often enough that it was a significant drain on the team’s productivity.

We did evaluate yarn and pnpm but neither were workable in our particular situation. They were both faster at installing dependencies but still far too slow.

The lingering question here is whether the npm install was so slow because of the sheer number of dependencies or because something about those dependencies was slow. Anecdotally it seemed like rxjs took forever to install but other issues led us away from angular before we fully understood this.

Even when the npm install could be avoided, the actual compile step was still slow enough to be a drain on the team. The projects we were using angular on were quite new with a fairly small amount of code. Running through the development server was fast, but a production mode build was slow.

Existing System Integration

The initial projects we used angular 2 on were completely from scratch so could do everything the angular 2 way. On those projects productivity was excellent and angular 2 was generally a joy to use. When we tried to build onto our existing systems using angular 2 things were much less pleasant.

Technically it was possible to build a single component on a page using angular 2 with other parts of the page using our older approach, but doing so felt fairly unnatural. The angular 2 way is significantly different to how we had been working, and since angular 2 provides a full suite of functionality it often felt like we were working against the framework rather than with it. Re-using our existing code within an angular 2 component felt wrong, so we were being pushed towards duplicating code that worked perfectly well and that we were happy with, just to make it fit “the angular 2 way”.

If we intended to rewrite all our existing code using angular 2 that would be fine, but we’re not doing that. We have a heap of functionality that’s already built, working great and will be unlikely to need changes for quite some time. It would be a huge waste of time for us to go back and rewrite everything just to use the shiny new tool.

Angular 2 is Still Great

None of this means that angular 2 has irretrievable faults, it’s actually a pretty great tool to develop with. It just happens to shine most if you’re all-in with angular 2 and that’s never going to be our situation. I strongly suspect that even the build time issues would disappear if we could approach the build differently, but changing large parts of our build system and the development practices that work with it just doesn’t make sense when we have other options.

I can’t see any reason why a project built with angular 2 would need or want to migrate away. Nor would I rule out angular 2 for a new project. It’s a pretty great library, provides a ton of functionality that you can just run with and has excellent tooling. Just work out early on how your build will fit in and whether it’s going to be too slow.

For us though, Angular 2 didn’t turn out to be the wonderful new world we hoped it would be.

June 19, 2017 05:12 AM

June 14, 2017

Adrian SuttonUnit Testing JavaScript Promises with Synchronous Tests

With Promises/A+ spreading through the world of JavaScript at a rapid pace, there’s one little detail that makes them very hard to unit test: any chained actions (via .then()) are only called when the execution stack contains only platform code. That’s a good thing in the real world but it makes unit testing much more complex because resolving a promise isn’t enough – you also need to empty the execution stack.

The first, and best, way to address these challenges is to take advantage of your test runner’s asynchronous test support. Mocha for example allows you to return a promise from your test and waits for it to be either resolved to indicate success or rejected to indicate failure. For example:

it('should pass asynchronously', function() {
    return new Promise((resolve, reject) => {
        setTimeout(resolve, 100);
    })
});

This works well when you’re testing code that returns the promise to you so you can chain any assertions you need and then return the final promise to the test runner. However, there are often cases where promises are used internally to a component which this approach can’t solve. For example, a UI component that periodically makes requests to the server to update the data it displays. 

Sinon.js makes it easy to stub out the HTTP request using its fake server and the periodic updates using a fake clock, but if promises are used sinon’s clock.tick() isn’t enough to trigger chained actions. They’ll only execute after your test method returns, and since there’s no reason, and often no way, for the UI component to pass a promise for its updates out of the component we can’t just depend on the test runner. That’s where promise-mock comes in. It replaces the normal Promise implementation with one that allows your unit test to trigger callbacks at any point.

Let’s avoid all the clock and HTTP stubbing by testing this very simple example of code using a Promise internally:

let value = 0;
module.exports = {
    setValueViaImmediatePromise: function (newValue) {
        return new Promise((resolve, reject) => resolve(newValue))
                .then(result => value = result);
    },
    getValue: function () {
        return value;
    }
};

Our test is then:

const asyncThing = require('./asyncThing');
const PromiseMock = require('promise-mock');
const expect = require('expect.js');
describe.only('with promise-mock', function() {
    beforeEach(function() {
        PromiseMock.install();
    });
    afterEach(function() {
        PromiseMock.uninstall();
    });
    it('should set value asynchronously and keep internals to itself', function() {
        asyncThing.setValueViaImmediatePromise(3);
        Promise.runAll();
        expect(asyncThing.getValue()).to.be(3);
    });
});

We have a beforeEach and afterEach to install and uninstall the mocked promise, then when we want the promise callbacks to execute in our test, we simply call Promise.runAll().  In most cases, promise-mock combined with sinon’s fake HTTP server and stub clock is enough to let us write easy-to-follow, synchronous tests that cover asynchronous behaviour.

Keeping our tests synchronous isn’t just about making them easy to read though – it also means we’re in control of how asynchronous callbacks interleave. So we can write tests to check what happens if action A finishes before action B and tests for what happens if it’s the other way around. Lots and lots of bugs hide in those areas.
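A rough sketch of that kind of test, using manually controlled deferreds alongside promise-mock (the deferred helper is my own, not part of the library, and this assumes Promise.runAll() can be called repeatedly and the same expect.js/promise-mock setup as the test above):

// Hand-rolled deferred so the test decides exactly when each promise resolves.
function deferred() {
    let resolve;
    const promise = new Promise(r => { resolve = r; });
    return {promise, resolve};
}

it('lets the test decide which action finishes first', function() {
    const a = deferred();
    const b = deferred();
    const results = [];
    a.promise.then(() => results.push('A'));
    b.promise.then(() => results.push('B'));

    b.resolve();        // only B has completed so far
    Promise.runAll();
    expect(results).to.eql(['B']);

    a.resolve();        // now let A complete
    Promise.runAll();
    expect(results).to.eql(['B', 'A']);
});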

PromiseMock.install() Not Working

All that sounds great, but I spent a long time trying to work out why PromiseMock.install() didn’t ever seem to change the Promise implementation. I could see that window.Promise === PromiseMock was true, but without the window prefix I was still getting the original promise implementation (Promise !== PromiseMock).

It turns out, that’s because we were using babel’s transform-runtime plugin which was very helpfully rewriting references to Promise to use babel’s polyfill version without the polyfill needing to pollute the global namespace. The transform-runtime plugin has an option to disable this:

['transform-runtime', {polyfill: false}]

With that promise-mock worked as expected.

June 14, 2017 11:36 PM

June 09, 2017

Sarah Smith - Game BlogWorking Around Xcode Server CI Manual profile Issues

There are a few problems and issues to deal with when getting Xcode Server to work, and they've mostly been dealt with well in other blog posts.


The dreaded error you get with a manually provisioned profile is harder to get rid of:

Provisioning profile "MyApp Development iOS" doesn't include signing certificate "iPhone Developer: OS X Server (VQJ479QQTU)"

It occurs because when the Xcode server runs in the context of the OS X Server it has its own keychain called "Portal" and in that keychain it creates its own new certificates, which are accessible by the server process.

First get your CI server up and running and create a bot as described in the Apple macOS Server documentation for CI services, or a recent blog post.

If you've got a manually provisioned profile you'll get the error above. On your local development machine - e.g. your laptop - open Xcode and use the credentials export feature; it will ask for a password which you'll need to carefully record.



This will create an archive of the credentials including your certificates, keys, profiles and so on.  Copy that onto your macOS Server machine (e.g. using scp or a USB key or whatever) and then as the xcodeserver user that you created for the purpose open Xcode and import that archive, supplying the password created above.

Now open the Keychain Access program - I find it's easiest to do this by invoking Spotlight with <cmd>-<space> and then typing "Key...". Here you need to do the following:




You should now be able to run an integration and Xcode server will find and use the correct certificates to support your manually assigned profiles.

June 09, 2017 01:19 AM

June 08, 2017

Ben FowlerMy life for the next six months

June 08, 2017 01:18 PM

June 04, 2017

Ben MartinSix is the magic number

I have talked about controlling robot arms with 4 or 5 motors and the maths involved in turning a desired x,y,z target into servo angles. Things get a little too interesting with 6 motors as you end up with a great many solutions to a positioning problem and need to work out a 'best' choice.

<iframe allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/qfx0TrbOsok" width="560"></iframe>
So I finally got MoveIt! to work to control a six motor arm using ROS. I now also know that using MoveIt on lower order arms isn't going to give you much love. Six is the magic number (plus claw motor) to get things working and patience is your best friend in getting the configuration and software setup going.

This was great as MoveIt was the last corner of the ROS stack that I hadn't managed to get to work for me. The great part is that the knowledge I gained playing with MoveIt will work on larger more accurate and expensive robot arms.

June 04, 2017 07:02 AM

April 26, 2017

Tim KentIssue where KVM guest freezes just before installation of CentOS 7

I've been playing around with KVM on CentOS 7 in preparation for the RHCE exam. I was experiencing an issue where the guest virtual machine would freeze just before attempting an install (again, CentOS 7 as the guest).

The testing machine is quite old (has an Intel Core 2 6400 CPU) but it hasn't shown any other symptoms of hardware issues.

The logs didn't appear to show anything of interest other than some debugging information which is apparently normal:
[20389.379023] kvm [19537]: vcpu0 unhandled rdmsr: 0x60d
[20389.379034] kvm [19537]: vcpu0 unhandled rdmsr: 0x3f8
[20389.379039] kvm [19537]: vcpu0 unhandled rdmsr: 0x3f9
[20389.379043] kvm [19537]: vcpu0 unhandled rdmsr: 0x3fa
[20389.379048] kvm [19537]: vcpu0 unhandled rdmsr: 0x630
[20389.379053] kvm [19537]: vcpu0 unhandled rdmsr: 0x631
[20389.379057] kvm [19537]: vcpu0 unhandled rdmsr: 0x632

Anyway, I was able to work around the issue by feeding the "--cpu host" option to virt-install, or by ticking "Copy host CPU configuration" under the CPUs tab of the VM configuration.

Hope this helps save someone else some time!

April 26, 2017 06:53 AM

Tim KentPXE boot Debian using RouterOS as PXE server

I would typically use a Linux server for the purposes of PXE booting, but this is so straightforward it's a very attractive option. I'm using a MikroTik RB2011 (RouterOS v6.34.6) successfully.

This example assumes your router's LAN IP is 172.16.8.1 and the local subnet is 172.16.8.0/24.

First of all, download the netboot archive to a Linux machine (I'm using a Raspberry Pi here):
tim@raspberrypi /tmp $ wget http://ftp.au.debian.org/debian/dists/jessie/main/installer-amd64/current/images/netboot/netboot.tar.gz
tim@raspberrypi /tmp $ wget http://ftp.au.debian.org/debian/dists/jessie/main/installer-amd64/current/images/SHA256SUMS

Check that your archive matches the checksum file:
tim@raspberrypi /tmp $ grep `sha256sum netboot.tar.gz` SHA256SUMS
SHA256SUMS:460e2ed7db2d98edb09e5413ad72b71e3132a9628af01d793aaca90e7b317d46  ./netboot/netboot.tar.gz

Extract the archive to a tftp directory:
tim@raspberrypi /tmp $ mkdir tftp && tar xf netboot.tar.gz -C tftp

Copy tftp folder to the MikroTik:
tim@raspberrypi /tmp $ scp -r tftp admin-tim@172.16.8.1:

On the MikroTik, configure TFTP with a base directory of /tftp (omitting req-filename matches all):
[admin-tim@MikroTik] /ip tftp add ip-address=172.16.8.0/24 real-filename=tftp

Configure DHCP for PXE booting:
[admin-tim@MikroTik] /ip dhcp-server network set [ find address=172.16.8.0/24 ] boot-file-name=pxelinux.0 next-server=172.16.8.1

April 26, 2017 06:53 AM

Tim KentInstalling Raspbian on the Raspberry Pi 3 using raspbian-ua-netinst

I really like using the Raspbian unattended netinstaller (raspbian-ua-netinst) for doing headless installs of Raspbian to Raspberry Pi devices. You pretty much write the installer image to SD, create a configuration file, then insert the SD into the Pi and let it do the rest.

I wasn't able to install Raspbian to my Raspberry Pi 3 using the current latest build (1.0.9) of raspbian-ua-netinst as it still lacks support for this newer hardware.

Below is a quick guide on what I did to get it up and running successfully. I ran this from a Raspberry Pi but you could just as easily use any Linux machine:

Pull down the v1.1.x branch from GitHub:
$ git clone -b v1.1.x https://github.com/debian-pi/raspbian-ua-netinst.git

Download and build:
$ cd raspbian-ua-netinst
$ ./build.sh

Create the images you can then write to SD, this requires root for the loopback setup:
$ sudo ./buildroot.sh

If, like me, you're using a Raspberry Pi with limited swap, you may get an error when creating the xz archive due to it being unable to allocate sufficient memory to xz. It's no problem as you can use the uncompressed or bz2 image.

As an example you could run bzcat raspbian-ua-netinst-20170426-gited24416.img.bz2 redirected to the destination SD card (the card itself, not a partition device).

Hopefully this post will be redundant soon when a newer raspbian-ua-netinst is released with Raspberry Pi 3 support, but until then I hope this is useful to someone!

April 26, 2017 06:53 AM

Tim KentInstalling Debian on the APU2

This is a short post detailing the install of Debian on the PC Engines APU2 using PXE.

First of all you'll need to ensure you are running version 160311 or newer BIOS. You can find the BIOS update details here. If the PXE options are missing then there's a good chance you aren't running a new enough BIOS!

Connect to the system's console via the serial port using a baud rate of 115,200. I typically use screen on Linux/macOS or PuTTY on Windows.

Start the APU2 and press Ctrl+B when prompted to enter iPXE, or choose iPXE from the boot selection menu (F10).

Attempt boot from PXE using DHCP:
iPXE> autoboot

If all is well you will get to the "Debian GNU/Linux installer boot menu" heading, press TAB to edit the Install menu entry.

This should bring up something along the lines of:
> debian-installer/amd64/linux vga=788 initrd=debian-installer/amd64/initrd.gz --- quiet

You'll want to define the serial console by adding the console parameter to the end (and preseed parameter if used):
> debian-installer/amd64/linux vga=788 initrd=debian-installer/amd64/initrd.gz --- quiet console=ttyS0,115200

Press enter and you should be on your way!

April 26, 2017 06:53 AM

April 19, 2017

Ben MartinCNC Alloy Candelabra

While learning Fusion 360 I thought it would be fun to flex my new knowledge of cutting out curved shapes from alloy. Some donated LED fake candles were all the inspiration needed to design and cut out a candelabra. Yes, it is industrial looking. With vcarve and ball ends I could try to make it more baroque looking, but then that would require more artistic ability than a poor old programmer might have.


It is interesting working out how to fixture the cut for such creations. As of now, Fusion 360 will allow you to put tabs on curved surfaces, but you don't get to manually place them in that case. So it's a bit of fun getting things where you want them by adjusting other parameters.

Also, I have noticed some issues with tabs on curves where exact multiples of the layer depth align perfectly with the top of the tab height. Avoiding that case avoids the resulting undesired cuts. So as usual I managed to learn a bunch of stuff while making something that wasn't in my normal comfort zone.

The four candles run off a small buck converter and are wired in parallel at 3 volts to simulate the batteries they normally run off.

I can feel a gnarled brass candle base coming at some stage to help mitigate the floating candle look. Adding some melted real wax has also been suggested to give a more real look.

April 19, 2017 08:03 AM

March 28, 2017

Blue HackersBusyness: A Modern Health Crisis | LinkedIn

Benjamin Cardullo writes about an issue that we really have to take (more) seriously.  Particularly with mobile devices enabling us to be “connected” 24/7, being busy (or available) all of that time is not a good thing at all.

How do we measure professional success? Is it by the location of our office or the size of our paycheck? Is it measured by the dimensions of our home or the speed of our car? Ten years ago, those would have been the most prominent answers; however, today when someone is really pulling out the big guns, when they really want to show you how important they are, they’ll tell you all about their busy day and how they never had a moment to themselves.

Read the full article: https://www.linkedin.com/pulse/busyness-modern-health-crisis-benjamin-cardullo

March 28, 2017 01:35 AM

March 25, 2017

Adrian SuttonUsing WebPack with Buck

I’ve been gradually tidying up the build process for UI stuff at LMAX. We had been using a mix of requires and browserify – both pretty sub-optimally configured. Obviously when you have too many ways of doing things the answer is to introduce another new way so I’ve converted everything over to webpack.

Situation: There are 14 competing standards. We need to develop one universal standard that covers everyone's use cases. Situation: there are 15 competing standards...

Webpack is most often used as part of a node or gulp/grunt build process but our overall build process is controlled by Buck so I’ve had to work it into that setup. I was also keen to minimise the amount of code that had to be changed to support the new build process.

The final key requirement, which had almost entirely been missed by our previous UI build attempts, was the ability to easily create reusable UI modules that are shared by some, but not all, projects. Buck shuns the use of repositories in favour of a single source tree with everything in it so an internal npm repo wasn’t going to fly.

While the exact details are probably pretty specific to our setup, the overall shape of the build likely has benefit.  We have separate buck targets (using genrule) for a few different key stages:

Build node_modules

We use npm to install third party dependencies to build the node_modules directory we’ll need. We do this in an offline way by checking in the node cache as CI doesn’t have internet access, but it’s pretty unsatisfactory. Checking in the node_modules directory was tried previously but both svn and git have massive problems with the huge number of files it winds up containing.

yarn has much better offline support and other benefits as well, but its offline support requires a cache, and the cache has every package already expanded so it winds up with hundreds of thousands of files to check in and deal with. Further investigations are required here…

For our projects that use Angular 2, this is actually the slowest part of our entire build (UI and server side). rxjs seems to be the main source of issues as it takes forever to install. Fortunately we don’t change our third party modules often and we can utilise the shared cache of artefacts so developers don’t wind up building this step locally too often.

Setup a Workspace and Generate webpack.config.js

We don’t want to have to repeat the same configuration for webpack, typescript, karma etc for everything UI thing we build. So our build process generates them for us, tweaking things as needed to account for the small differences between projects. It also grabs the node_modules from the previous step and installs any of our own shared components (with npm install <local/path/to/component>).
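The generated config itself doesn’t need to be anything fancy; something in the spirit of this sketch (the paths and names are placeholders, not our real generator output):

// webpack.config.js, emitted into the workspace by the build step
const path = require('path');

module.exports = {
    entry: './src/main.js',                     // per-project entry point
    output: {
        path: path.resolve(__dirname, 'dist'),  // picked up by the downstream buck target
        filename: 'bundle.js'
    },
    devtool: 'source-map'
};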

Build UI Stuff

Now we’ve got something that looks just like most stand-alone javascript projects would have – config files at the root, source ready to combine/minify etc. At this point we can just run things out of node_modules. So we have a target to build with ./node_modules/.bin/webpack, run tests with ./node_modules/.bin/karma or start the webpack dev server.

Buck can then pick up those results and integrate them where they’re needed in the final outputs ready for deployment.

March 25, 2017 09:58 AM

March 19, 2017

Adrian SuttonFinding What Buck Actually Built

Buck is a weird but very fast build tool that happens to be rather opaque about where it actually puts the things you build with it. They wind up somewhere under the buck-out folder but there’s no guarantee where and everything under there is considered buck’s private little scratch pad.

So how do you get build results out so they can be used? For things with built-in support like Java libraries you can use ‘buck publish’ to push them out to a repo but that doesn’t work for things you’ve built with a custom genrule. In those cases you could use an additional genrule build target to actually publish but it would only run when one of its dependencies has changed. Sometimes that’s an excellent feature but it’s not always what you want.

Similarly, you might want to actually run something you’ve built. You can almost always use the ‘buck run’ command to do that but it will tie up the buck daemon while it’s running so you can’t run two things at once.

For ultimate flexibility you really want to just find out where the built file is, which thankfully is possible using ‘buck targets --show-full-output’. However it outputs both the target and its output:

$ buck targets --show-full-output //support/bigfeedback:bigfeedback
//bigfeedback:bigfeedback /code/buck-out/gen/bigfeedback/bigfeedback.jar

To get just the target file we need to pipe it through:

cut -d ' ' -f 2-

Or as a handy reusable bash function:

function findOutput() {
    $BUCK targets --show-full-output ${1} | cut -d ' ' -f 2-
}

 

March 19, 2017 05:30 PM

March 12, 2017

Adrian SuttonReplacing Symlinks with Hardlinks

Symlinks have been causing me grief lately.  Our build tool, buck, loves creating symlinks but publishes corrupt cache artefacts for any build rule that includes a symlink amongst its output.

We also wind up calling out to npm to manage JavaScript dependencies and it has an annoying (for us) habit of resolving symlinks when processing files and then failing to find required libraries because the node_modules folder was back where the symlink was, not with the original file. Mostly this problem is caused by buck creating so many symlinks.

So it’s useful to be able to get rid of symlinks, which can be done with the handy -L or --dereference option to cp. Then instead of copying the symlink you copy the file it points to. That avoids all the problems with buck and npm but wastes lots of disk space and means that changes to the original file are no longer reflected in the new copy (so watching files doesn’t work).

Assuming our checkout is on a single file system (which seems reasonable) we can get the best of both worlds by using hard links. cp has a handy option for that too: -l or --link. But since buck gave us a symlink to start with, it just gives us a hard link to the symlink that points to the original file.

So combining the two options, cp -Ll, should be exactly what we want. And if you’re using coreutils 8.25 or above it is. cp will dereference the symlink and create a hard link to the original file. If you’re using coreutils prior to 8.25 cp will just copy the symlink. Hitting a bug in coreutils is pretty much the definition of the world being out to get you.

Fortunately, we can work around the issue with a bit of find magic:

find ${DIR} -type l -exec bash -c 'ln -f "$(readlink -m "$0")" "$0"' {} \;

‘find -type l’ will find all symlinks. For each of those we execute some bash which, reading from the inside out, dereferences the symlink with readlink -m then uses ln to create a hard link, with the -f option forcing it to overwrite the existing symlink.

Good times…

March 12, 2017 02:42 PM

March 05, 2017

Ben MartinNon self replicating reprap 3d printer

The reprap is designed to be able to "self replicate" to a degree. If a part on a reprap 3d printer breaks then a replacement part can be printed and attached. Parts can evolve as new ideas come along. Having parts crack or weaken on a 3d printer can be undesirable though.

A part on this printer was a mix of acrylic and PLA, both of which were cracked. Not quite what one would hope for as a foot of the y-axis. It is an interesting design with the two driving rods the same length as the alloy channel at the back of the printer.



A design I thought of called for 1/2 inch alloy in order to wrap the existing alloy extrusion with a 3mm cover. The dog bone on the slot is manually added in Fusion 360 so it is larger than needed. The whole thing being a learning exercise for me as to how to create 2.5D parts. The belt tensioning is on a 6mm subassembly that is mounted on the bracket in the right of the image below.


The bracket and subassembly are shown mounted below. Yes, using four M6 bolts to tension a belt is overkill. I would imagine you can stretch the belt to breaking point quite easily with these bolts. The two rods are locked into place using M3 tapped grub screws. The end brackets are bolted to the back extrusion using two M6 bolts.


The z-axis is now supported by a second 10mm alloy custom bracket. This combination makes it much, much harder to wobble the z-axis than the original design using plastic parts.




March 05, 2017 08:21 AM

February 21, 2017

Tony BilbroughBack to School

OK, after 3 years, 2 weeks and 3 days, I have forgotten everything I learned about a blog. Never touched the thing. Spent all my time ogling Facebook, and creating nothing.

So it’s back to school, back to the utter basics. Back to rush hour train schedules and back to late night dinners! Indeed, it’s back to The Edge and the company of about twenty other struggling students.

Later – after class I will try and add a photo.

 


February 21, 2017 09:16 AM

February 12, 2017

Ben MartinPrinter bracket fix

Similar to many 3d printer designs, many of the parts on this 3d printer are plastic. Where the Z-Axis meets the Y-Axis is held in place by two top brackets (near the gear on the stepper is a bolt to the z alloy extrusion) and the bottom bracket. One flaw here is that there are no bolts to the z-axis on the bottom bracket. It was also cracked in two places so the structural support was low and the x-axis would droop over time. Not so handy.


The plastic is about 12mm thick and smells like a 2.5D job done by a 3d printer 'just because'.  So a quick tinker in Fusion 360 and the 1/2 inch thick flatland part was born. After removing the hold down tabs and flapping the remains away, 3 M6 bolt holes were hand drilled. Notice the subtle shift on the inside of the part where the extrusion and stepper motor differ in size.


It was quicker to just do that rather than try to remount and register on the cnc and it might not have even worked with the limited z range of the machine.


The below image only has two of the three bolts in place. With the addition of the new bolt heading into the z axis the rigidity of the machine went right up. The shaft that the z axis is mounted onto goes into the 12mm empty hole in the part.


This does open up thoughts of how many other parts would be better served by not being made out of plastic.


February 12, 2017 11:01 AM

January 29, 2017

Clinton RoySouth Coast Track Report

Please note this is a work in progress

I had previously stated my intention to walk the South Coast Track. I have now completed this walk and now want a space where I can collect all my thoughts.

Photos: Google Photos album

The sections I’m referring to here come straight from the guide book. Due to the walking weather and tides all being in our favour, we managed to do the walk in six days. We flew in late on the first day and did not finish section one of the walk; on the second day we finished section one and then completed sections two and three. On day three it was just the Ironbound range. On day four it was just section five. Day five we completed section six and the tiny section seven. Day six was section eight and day seven was Cockle Creek (TODO something’s not adding up here)

The hardest day, not surprisingly, was day three where we tackled the Ironbound range, 900m up, then down. The surprising bit was how easy the ascent was and how god damn hard the descent was. The guide book says there are three rest camps on the descent, with one just below the peak, a perfect spot for lunch. Either this camp is hidden (e.g. you have to look behind you) or it’s overgrown, as we all missed it. This meant we ended up skipping lunch and were slipping down the wet, muddy, awful descent side for hours. When we came across the mid rest camp stop, because we’d been walking for so long, everyone assumed we were at the lower camp stop and that we were therefore only an hour or so away from camp. Another three hours or so later we actually came across the lower camp site, and by that time all sense of proportion was lost and I was starting to get worried that somehow we’d gotten lost, were not on the right trail and that we’d run out of light. In the end I got into camp about an hour before sundown (approx eight) and B&R got in about half an hour before sundown. I was utterly exhausted, got some water, pitched the tent, collapsed in it and fell asleep. Woke up close to midnight, realised I hadn’t had any lunch or dinner, still wasn’t actually feeling hungry. I forced myself to eat a hot meal, then collapsed in bed again.

TODO: very easy to follow trail.
TODO: just about everything worked.
TODO: spork
TODO: solar panel
TODO: not eating properly
TODO: needing more warmth

I could not have asked for better walking companions, Richard and Bec.


Filed under: camping, Uncategorized

January 29, 2017 11:08 AM

January 23, 2017

Ben MartinOHC2017 zero to firmware in < 2 hours

I thought I'd make some modifications along the way in the build, so I really couldn't do a head to head with the build time I had heard about (a lowish number of minutes). The on/off switch being where it was didn't fit my plans, so I moved it off board and also moved the battery off board so that I might use the space below the screen for something, perhaps where the stylus lives in the case.


I did manage to go from opening the packet to having the firmware environment set up, the firmware built, and uploaded in less than 2 hours total. No bridges, no hassles, heat shrink on the cables around the place and 90 degree headers across the bottom of the board for future tinkering.

This is going to look extremely stylish in a CNCed hardwood case. My current plan is to turn it into a smart remote control. Rotary encoder for volume, maybe modal so that the desired "program" can be selected quickly from a list without needing to flick or page through things.

January 23, 2017 12:06 PM

January 15, 2017

Blue HackersBlueHackers session at Linux.conf.au 2017

If you’re fortunate enough to be in Tasmania for Linux.conf.au 2017 then you will be pleased to hear that we’re holding another BlueHackers BoF (Birds of a Feather) session on Monday evening, straight after the Linux Australia AGM.

The room is yet to be confirmed, but all details will be updated on the conference wiki at the following address: https://linux.conf.au/wiki/conference/birds_of_a_feather_sessions/bluehackers/

We hope to see you there!

January 15, 2017 09:55 AM

January 09, 2017

Adrian SuttonBenq GW2765 Monitor Display Port “No Signal Detected”

I have three Benq GW2765 monitors which periodically report “No Signal Detected” for DisplayPort even when the computer it’s attached to recognises the monitor is present (displaying it in the monitors/displays list etc). Changing  the DisplayPort cable or plugging it into a different computer doesn’t help (I tried with both Mac OS X and Linux/Fedora machines), but HDMI and D-Sub connections work perfectly (but can’t support the full screen resolution). I can even disconnect a cable from a working monitor, plug it into a non-working monitor and it will continue to complain about no signal, but plug the cable back into the working monitor and it carries on working fine.

The solution turns out to be quite simple – unplug it, wait for the power indicator light to turn off and turn it back on. It will then detect the DisplayPort signal correctly. Unplugging the DisplayPort cable and plugging it back in will not help, nor will turning the monitor off with its power button. Briefly disconnecting the power cable and reconnecting it isn’t enough; you have to wait 5-10 seconds for the power indicator light to turn off.

Naturally that means that you can do all kinds of due diligence testing at home before deciding it’s a hardware problem and returning it to the shop. When you get to the shop it will work perfectly because it’s been unplugged on the car trip.

So that was fun…

January 09, 2017 01:15 AM

January 08, 2017

Ben FowlerPimp my Shell, 2017 Edition

iTerm2 running on macOS Sierra. Zsh and Prezto for the shell. I use tmux for session management, with a heavily modified configuration. The status bar has been styled to match the Solarized Dark theme used everywhere. In the status bar, I have a biff indicator which disappears when there's no unread mail. I've also got build status indicators, which let me see at a glance when somebody breaks the build.

January 08, 2017 01:32 AM

January 07, 2017

Ben MartinMachine Control with MQTT

MQTT is an open standard for message passing in the IoT. If a device or program knows something interesting it can offer to publish that data through a named message. If things want to react to those messages they can subscribe to them and do interesting things. I took a look into the SmoothieBoard firmware trying to prize an MQTT client into it. Unfortunately I had to back away at that level for now. The main things that I would love to have as messages published by the smoothie itself are the head position, job processing metadata, etc.

So I fell back to polling for that info in a little nodejs server. That server publishes info to MQTT and also subscribes to messages, for example "move the spindle to X,Y" or the like. I thought it would be interesting to make a little web interface to all this. Initially I was tempted to throw data over websockets myself, but then discovered that you can speak MQTT right over a websocket to mosquitto. So a bootstrap web interface to the CNC was born.
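The bridge itself only needs to be a handful of lines. This is a rough sketch rather than the actual code, with the topic names and the two helper functions made up for illustration; it uses the mqtt npm package talking to mosquitto:

const mqtt = require('mqtt');
const client = mqtt.connect('mqtt://localhost');

// Hypothetical helpers standing in for the code that actually talks to the smoothieboard.
function readHeadPosition() { return {x: 0, y: 0, z: 0}; }
function sendGcode(line) { console.log('would send:', line); }

client.on('connect', () => {
    client.subscribe('cnc/command/move');
});

// Poll the controller and publish the head position; retain it so late subscribers see the last value.
setInterval(() => {
    client.publish('cnc/state/position', JSON.stringify(readHeadPosition()), {retain: true});
}, 1000);

client.on('message', (topic, payload) => {
    if (topic === 'cnc/command/move') {
        const {x, y} = JSON.parse(payload.toString());
        sendGcode('G0 X' + x + ' Y' + y);
    }
});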



As you can see I opted out of the pronterface style head control. For me, on a touch panel the move X by 1 and move X by 10 are just too close in that layout. So I select the dimension in a tab and then the direction with buttons. Far, far, less chance of an unintended move.

Things get interesting on the files page. Not only are the files listed, but I can "head" a file and that becomes a message stored by mosquitto. As the files on the sdcard of the smoothieboard don't change (for me), the head only has to be performed once per file. It's handy because you can see the header comment that the CAM program added to the G-Code, so you can work out what you were thinking at the time you made the gcode. Assuming you put the metadata in, that is.

I know that GCode has provisions for laying out multiple coordinate spaces for a single job, so you can cut 8 of the same thing at a single time from one block of stock. I've been doing 2-4 up manually. So I added a "Saves" tab to be able to snapshot a location and restore to it again later. This way you can run a job, move home by 80mm in X and run the same job again to cut a second item. I have provision for a bunch of saves, but only 1 is shown in the web page below.




This is all backed by MQTT. So I can start jobs and move the spindle from the terminal, a phone, or through the web interface.


January 07, 2017 07:58 AM

January 01, 2017

Ben MartinKeeping an eye on it

The CNC enclosure now sports a few cameras so I can keep an eye on things from anywhere. The small "endocam" mounting worked out particularly well. The small bracket was created using 2mm alloy, jigsawed, flapped, drilled and mounted fairly quickly. These copper coated saddle clamps also add a look-good factor to the whole build.



A huge plus side is that I now also have a good base to bolt the mist unit onto. It is tempting to redesign the camera mounting bracket in Fusion and CNC a new one in 6mm alloy but there's no real need for this purpose. Shortest effective path to working solution and all that.

January 01, 2017 11:04 PM

December 27, 2016

Ben MartinFirst alloy on the 3040 cnc (with 2.2kw spindle)

There are times when words are not needed. When you see a 3040 or 6040 cnc without any enclosure there is a good chance that the machine doesn't see heavy alloy cutting. It only takes a few videos to see how chips are thrown around when a 24krpm bit touches a block of alloy. As a prelude to any alloy being cut I enclosed the 3040 in a "terrarium". This was itself an interesting build and as usual I overdid the design. The top and bottom box frames are made of 5cm square timber with a fairly solid base panel. The back is just light junk with plywood bolted to tabs on each side so I can replace things as I feel. The door opens beyond 90 degrees to get right out of the way and closes to rest on the base 5cm timber at the front of the enclosure.


For anybody reading this I have one piece of advice: any gaps in the first 50cm from the machine base will have chips thrown at them. So make sure that the angles the chips might come from near the spindle have been accounted for in any air venting that lets some cooling into the mix. The sides of this case are more than 80cm in height.

The next modification is a mister to help clear local chips and bring some light amount of cutting fluid into the cut zone. The first runs were just using a light spray of CDT over the cut zone before job start.

The very end of one of the first runs is shown in the below video.

<iframe allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/jO7TZqIlNyE" width="560"></iframe>

The parts being cut are wheel mount crossover plates to allow an outdoor robot to have larger wheels attached. The wheels want M8 bolts, the motor mount is an actobotics pattern, so an M4 hole was a good fit there. Because it's CNC the part itself was cut with many splines to include material where it could do structural good and exclude it otherwise.

I found it useful to cut templates in MDF to test the fit before a final run. This fed into part 3 which includes mounting holes for all 4 bolts of the hub mount. The alloy version 4 also has rounded ends and is shown attached to the wheel. This will let some cheap $10 wheels which are 12 inch across mount to an actobotics based robot.


I'll have video of the "houndbot" in action using these mounts next time.

December 27, 2016 08:56 AM

December 12, 2016

Ben Martin3040/24,000 CNC first dry run in place

The progression has finally reached an upgraded CNC with high power spindle. Things still move around fine to the eye, the next step is likely to do some test drills at known distances to see if the additional weight has had an impact on the steppers that can't be easily seen.


There were a few interesting moments when spinning up to 24,000 rpm. At around 320Hz there was a new loud rattle. I think this turned out to be resonance with either something that was on the cutting plate or the washers on the toggle clamps.

There is going to be video once this machine starts eating alloy. The CNC needs to be lowered into an enclosure (the easier part) so that chips and the like go into a known location. The enclosure itself needs to be made first ;)

Ironically a future goal is to be going smaller. Seeing if twice the number of microsteps can be pulled off in order to get better precision and cut QFN landing zones on PCBs.

December 12, 2016 02:44 AM

December 09, 2016

Ben Martin3040 spindle upgrade: the one day crossover plate

Shown below is the spindle that came with my 3040 "engraving" cnc next to the 2.2kw water cooled monster that I am upgrading to. See my previous blog post for videos of the electronics and spindle test on the bench.


The crossover plate, which I thought was going to be the most difficult part, was completed in a day. I had some high torsion M6 bolts floating around with one additional great feature: the bolt head is nut shaped, giving a lower clearance than some other heads like socket heads. The crossover is shown from the top in the below image. I first cut down the original spindle mount and sanded it flat to make the "bearing mount" as I called it. Then the crossover attaches to that and the spindle mount attaches to the crossover.

Notice the bolts coming through to the bearing mount. The low profile bolt head just fits on each side of the round 80mm diameter spindle mount. I did have to do a little dremeling out of the bearing mount to fit the nuts on the other side. This was a trade off, I wanted those bolts as far out from the centre line as possible to maximize the possibility that the spindle mount would bolt on flat without interfering with the bolts that attach the crossover to the bearing mount.



A more side profile is shown below. The threaded rod is missing for the z-axis in the picture. It is just a test fit. I may end up putting the spindle in and doing some "dry runs" to make sure that the steppers are happy to move the right distances with the additional weight of the spindle. I did a test run on the z-axis before I started, just resting the spindle on the old spindle and moving the z up and down.



I need to drop out a cabinet of sorts for the cnc before getting into cutting alloy. The last thing I want is alloy chips and drill spirals floating around on the floor and getting tracked into other rooms.

December 09, 2016 08:43 AM

December 08, 2016

Ben Martin3040 for alloy

I have finally fired up a 2.4kw 24,000 rpm spindle on the test bench. This has water cooling and is VFD controlled. The spindle runs on 3 phase AC power.

<iframe allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/NwBygDgtKek" width="560"></iframe>

One thing that is not mentioned much is that the spindle itself and bracket runs to around 6-7kg. Below is the spindle hitting 24,000 rpm for the first time.

<iframe allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/uQfg2fOTMF0" width="560"></iframe>
With this and some other bits a 3040 should be able to machine alloy.

December 08, 2016 09:02 AM

December 06, 2016

Adrian SuttonFun with Nvidia Drivers and Fedora Upgrades

After any major Fedora upgrade my system loses the proprietary nvidia drivers that make X actually work (I’ve never successfully gotten the nouveau drivers to handle my card and multi-monitor setup) so the system reboots and simply presents an “oops, something went wrong” screen.

The issue seems to be that the nvidia driver doesn’t recompile for the new kernel, despite the fact that I’m using akmod packages which should in theory automatically recompile for new kernels.

The tell-tale sign is:

[   161.484] (II) LoadModule: "nv"
[   161.484] (WW) Warning, couldn't open module nv
[   161.484] (II) UnloadModule: "nv"
[   161.484] (II) Unloading nv
[   161.484] (EE) Failed to load module "nv" (module does not exist, 0)

in the Xorg logs.

Some digging reveals that the akmod recompilation process should be triggered by /etc/kernel/postinst.d/akmodsposttrans but for whatever reason that didn’t run.

The key piece of that script was running akmods similar to:

/usr/sbin/akmods --from-kernel-posttrans --kernels 4.8.11-300.fc25.x86_64

The last argument is the current kernel version, which should match the directory name in /lib/modules/ – there will likely be a few options, either run the command for each of them or pick the latest which is likely to be the one missing the nvidia drivers.

Run that script, reboot and everything came back just fine, though there is likely a better way to do it…

December 06, 2016 09:02 PM

November 01, 2016

Ben MartinHoundbot progresses

All four new shocks are now fitted! The tires are still deflated so they look a little wobbly. I ended up using a pillow mount with a 1/4 inch channel below it. The pillow is bolted to the channel from below and the channel is then bolted from the sides through the alloy beams. The glory here is that the pillows will never come off. If the bolts start to vibrate loose they will hit the beam and be stopped. They can not force the pillow mount up to get more room because of the bolts securing the 1/4 inch channel to the alloy beams coming in from the sides.


I'm not overly happy with the motor hub mount to wheel connection which will be one of the next points of update. Hopefully soon I will have access to a cnc with a high power spindle and can machine some alloy crossover parts for the wheel assembly. It has been great to use a dual vice drill and other basic power and hand tools to make alloy things so far. But the powerful CNC will open the door to much 2.5D stuff using cheapish sheet alloy.

But for now, the houndbot is on the move again. No longer do the wheels just extend outward under load. Though I don't know if I want to test the 40km/h top speed without updating some of the mountings and making some bushings first.


November 01, 2016 10:08 AM

October 25, 2016

Adrian SuttonTesting@LMAX – Screenshots with Selenium/WebDriver

When an automated UI test fails, it can be hard to tell exactly what went wrong just from the failure message. The failure message typically just says that some element the test was looking for wasn’t found, but it doesn’t tell you what was there.  Was there an error message displayed instead? Was the operation still executing? Did something completely unexpected happen instead?

To answer those questions our DSL automatically captures a screenshot when any UI operation fails and we include a link to it in the failure message. That way when someone reviews the test result they can see exactly what was on screen, which typically makes it straightforward to identify what went wrong and fix it.

Until recently we’d been using the convenient and helpful looking TakesScreenshot.getScreenshotAs method that WebDriver provides.  For example:

((TakesScreenshot) webDriver).getScreenshotAs(
        new SaveScreenshotOutputType(pngFilename));

As expected, this creates a PNG image in the specified location that looks for all the world like a screenshot of the browser content. Unfortunately, it’s lying.

WebDriver actually does something very clever and gets the browser to render the page content into a canvas element and then saves that as the PNG file. This is an extremely close approximation of what the page looks like with two important exceptions:

  1. It doesn’t respect the viewport size so body content is never scrolled off-screen.
  2. Any browser chrome or random other windows that have popped up aren’t shown.

Both of these things can be an issue – the scrolled-off-screen one being the most problematic.  Modern WebDriver quite accurately simulates a user clicking and typing keys, so if something’s not on screen it can’t be clicked. When your test fails because an element was “present but not visible” and the screenshot shows it as very clearly visible, hilarity ensues. Very frustrating hilarity.

To fix this we’ve started taking honest-to-goodness screenshots. Since all our tests get their own X session (courtesy of vncserver) their windows are completely isolated from each other and a dump of the entire screen will capture precisely what a real user would see, browser chrome and scrolling included. Linux provides an entertaining array of options for capturing screenshots from the command line but the one that happened to be already installed was import, part of the ImageMagick suite. We simply execute:

import -display :20 -window root screenshot.png

where :20 is the X display this particular test has been allocated and screenshot.png is where we want the screenshot to wind up.
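
From Java test code this can be done by shelling out; a minimal sketch (the helper class and file handling here are illustrative, not our actual DSL code):

import java.io.File;
import java.io.IOException;

final class FullScreenCapture {
    // Sketch only: run ImageMagick's import against the test's X display and dump the root window.
    static void capture(String display, File screenshotFile) throws IOException, InterruptedException {
        Process process = new ProcessBuilder(
                "import", "-display", display, "-window", "root", screenshotFile.getAbsolutePath())
                .redirectErrorStream(true)
                .start();
        if (process.waitFor() != 0) {
            throw new IOException("import exited with status " + process.exitValue());
        }
    }
}

Calling capture(":20", new File("screenshot.png")) then matches the command line above.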

Since the WebDriver screenshot can be useful as well – for example, finding out whether an error message is displayed at the top of the screen – we continue to grab that too.

Finally, for completeness we grab a dump of the DOM to an HTML file so we can later inspect what IDs, classes, attributes etc. are present, including any hidden elements. webDriver.getPageSource() makes that easy and we append an extra HTML comment that includes webDriver.getCurrentUrl() for good measure.
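
A minimal sketch of that DOM dump (the class and method names are illustrative, not the actual DSL code):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import org.openqa.selenium.WebDriver;

final class DomDump {
    // Sketch only: write the page source plus the current URL (as an HTML comment) for later inspection.
    static void dump(WebDriver webDriver, Path htmlFile) throws IOException {
        String dom = webDriver.getPageSource()
                + "\n<!-- captured from " + webDriver.getCurrentUrl() + " -->\n";
        Files.write(htmlFile, dom.getBytes(StandardCharsets.UTF_8));
    }
}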

October 25, 2016 06:47 AM

Adrian SuttonTesting@LMAX – Isolate UI Tests with vncserver

One reason that automated UI tests can be unreliable is that they tend to be sensitive to what else is on screen at the time and even things like the current screen size. Developers running the tests locally also find it annoying to have windows opening and closing on their machine while the test runs and are unable to do anything else because their clicking might interfere with the test.

At LMAX we solve that by isolating tests in their own X session, created using vncserver. We simply start vncserver with:

vncserver :20 -geometry 1600x1200

Then set DISPLAY=:20 as an environment variable when starting WebDriver’s Firefox instance:

FirefoxBinary firefoxBinary = new FirefoxBinary();
firefoxBinary.setEnvironmentProperty("DISPLAY", ":20");
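
Putting it together, a self-contained sketch (assuming the Selenium 2/3-era FirefoxDriver constructor that takes a FirefoxBinary and a FirefoxProfile; the display string would normally come from whichever vncserver the test was allocated):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxBinary;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxProfile;

final class IsolatedFirefox {
    // Sketch only: point Firefox at the isolated X display before creating the driver.
    static WebDriver create(String display) {
        FirefoxBinary firefoxBinary = new FirefoxBinary();
        firefoxBinary.setEnvironmentProperty("DISPLAY", display);
        return new FirefoxDriver(firefoxBinary, new FirefoxProfile());
    }
}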

The Firefox window then pops up in its own isolated X session. We can still use a vnc client to watch as the test runs, but we can also let it run in the background and continue using the machine for other things. In CI it allows us to run UI tests on a headless server.

Since we run a number of tests in parallel, in CI we start a number of vncserver instances and allocate a different one to each running test to ensure they’re completely isolated.

Simple, but incredibly effective.

October 25, 2016 06:46 AM

October 16, 2016

Clinton RoyIn Memory of Gary Curtis

This week we learnt of the sad passing of a long term regular attendee of Humbug, Gary Curtis. Gary was often early, and nearly always the last to leave.

One  of Gary’s prized possessions was his car, more specifically his LINUX number plate. Gary was very happy to be our official airport-conference shuttle for linux.conf.au keynote speakers in 2011 with this number plate.

Gary always had very strong opinions about how Humbug and our Humbug organised conferences should be run, but rarely took to running the events himself. It became a perennial joke at Humbug AGMs that we would always nominate Gary for positions, and he would always decline. Eventually we worked out that Humbug was one of the few times Gary wasn’t in charge of a group, and that was relaxing for him.

A topic that Gary always came back to was genealogy, especially the phone app he was working on.

A peculiar quirk of Humbug meetings is that they run on Saturday nights, and thus we often have meetings at the same time as Australian elections. Gary was always keen to keep up with the election on the night, often with interesting insights.

My most personal memory of Gary was our road trip after OSDC New Zealand: we did something like three days of driving around in a rental car, staying at hotels along the way. Gary’s driving did little to impress me, but he was certainly enjoying himself.

Gary will be missed.

 



October 16, 2016 04:39 AM

September 19, 2016

Sarah Smith - Game BlogBurn your GANTT Charts & Deliver Game Releases Like a Boss

We all want our teams to be treated as the awesome creative people they are, but there are deadlines, and as producers & studio founders responsible for making sure the place stays afloat, how can we deliver our games on schedule and not put our people's feet to the fire?



How about setting fire to your schedule for a damn start?  We all know the dates on those things are a straight-up fiction!  They weren't working anyway, right?  They fail because of the Planning Fallacy - humans are fundamentally incapable of making accurate estimates of work to be done ahead of time.

There are alternative approaches where torching those old traditional MS Project-style schedules may well be a good thing, especially if your studio is looking for a clean slate on your team dynamics.  My talk at Seattle's Mobile Game Forum on October 18th 2016 deals with this topic and I'm going to throw a few spoilers here (the image above is a slide from that talk) - but go see the talk if you can, as the full content will be there.

This different way of working is a methodology developed over a decade of working with creative teams.  I use it today at my studio Smithsoft Games.  At its core it's a "kind-of agile" modified for the small, distributed, start-up teams of frequent collaborators which typify my teams.

What works well for my team are ideas raided from the agile camp.  I don't necessarily buy into the whole agile manifesto, because my teams & projects are usually distributed and multi-disciplinary, so not always face-to-face.  We respond to players (our customers) but they are not around the table; we typically use metrics & the occasional field test session, so we don't have a customer figure handy as required by agile.

Instead I have found that what works well is to know the agile principles and use their best ideas as far as possible while working in a way that is resilient against breakdowns.  I call this "raiding agile".

Backlogs are the best idea ever and a great takeaway from the agile camp.  The first and best step you can take along the road to delivering your game without crushing your people in schedule hell is to switch to backlog-driven development.

The basic approach we take (raiding agile) can be broken down into 3 parts: unleash your people, tool up your communication, and gamify your planning.

To get your head around the approach, basically just use one simple mnemonic - it all comes down to people-power: free your people & empower them to solve the problem of planning your project; communicate constantly and excellently with your people; and gamify the planning using backlogs so that every day your people are doing the planning work of keeping your project on track and having fun while they do it.

So how does it work in practice?

Unleash Your People


James Everett talks on "Trust in Game Design NZGDC 2016"
Unleashing your people is about trust.  Trust is a top item in the agile manifesto and something crucial to getting your team working.  Top designers & producers in game development already know this even if they don't use agile.

In the seminal book “Peopleware: Productive Projects and Teams”, DeMarco and Lister explain that management’s job is not to make people work.  Instead your job is to make it possible for them to work.

As a leader, you work to build trust: work to be in a situation where the people who work on your team trust you, and more importantly you trust them.

Your success will be defined not by how carefully you lay your plans, but by how well you recover when they go wrong.  And they will go wrong.  When that happens, your team will be there, if you have built trust.

If you try to impose control from the top, it amounts to a lack of trust that people will do what’s needed for the project.  Having your people waiting around for sign-off, when they could be working, is waste.

People can get into what I call the “Inbox” mentality, where they live their work lives like a rat in a cage pressing a bar, checking for the next item of work to be given to them by their supervisor.  That is not how lean, effective teams work.  Companies that operate via their inbox, and via pleasing bosses, lose the ability to respond quickly; they sacrifice agility and vibrancy for no actual gain.

Tool up your Communication


OK, so we’re going to TRUST our people, empower them to work on the project.  How do we as leaders make sure that the project is on track then?  Isn’t it our job to motivate them?

Imagine the throughput of your team is defined by this triangle - in this model the sides remain in constant proportion.


Once you have hired someone, competence is fixed. Let’s assume you have a team and you want to make the best of them.

Motivation - you could be forgiven for thinking that comes from being paid.  Some think, in a dated, authoritarian way, that motivation must be imposed from the top.  Let me tell you: for creative people it comes from a raft of things, such as accomplishment and recognition.

But in my experience it’s the communication which winds up being the restricting factor MOST often.  If you improve communication, you will begin to reach the potential of a motivated, competent team.

Technical projects have failures - but those are due to human reasons, most often a lack of shared understanding.  The best bang for your buck, for improving team performance, is time spent on communication.  And that means written and verbal communication, charts on the wall, wiki pages - anything that gets a shared message across.

Your job as leader is to filter outside impacts that can derail the team, simplify the view they have of the overall product and provide a compass set on true north.  Provide a consistent drum-beat that gives the project a cadence.  Through that repeated message infect the team with your own enthusiasm for the mission of the immediate project goals.

“We are going to ship the beta in March” - keep talking about what that will mean, and what it will look like.  Check back to see if people understand what that goal is - when it bites, what counts as success.

Have a GBC - a great big chart stuck physically to the wall that shows that goal.  If I walk up to one of your team members and ask what the next big milestone is, what’s in it and when it’s due, it should roll off their tongue instantly - if not, then that is on you.

Be Precise & Specific

Systematize names for things and use them across the board.  Don’t use vague words like “the build” or worse “IT”.  Make sure your tools, your plans and your verbal communications always use the same terms.  People cannot see inside your head.

When will “IT” be ready?
Is “IT” done yet?

If you ask an artist working on a concept for a character when “IT” will be ready, does that mean the concept, or a set of poses and clothes?  Use the milestone and feature names to avoid the “IT” problem.

Make Sure Communication is Two-Way

So you’re communicating clearly with your trusted team.  Is that communication two-way?  Trust comes when your team knows that they can tell you ANYTHING without fear.  That comes when you admit you don’t know.

Google’s three-year study Project Aristotle set out in 2012 to see what made the best teams.  Their leads had believed conventional wisdom, like the idea that the teams that did the best were the ones with the most talented members.  But it had not been studied to find out how true that was.

It turns out that people needed psychological safety, which is “a sense of confidence that the team will not embarrass, reject or punish someone for speaking up”.

As a leader you have to make sure that members of your team are rewarded for contributing their slice of wisdom.

Have you heard a conversation where there’s that one person who “knows more” than everyone else?  Where others should “shut up” because the authority is talking?

Knowledge is not a high-watermark.  Even if someone is still learning compared to an experienced person, remember this: the contributions of even the most inexperienced member of your team are never submerged by another’s.  Team members’ contributions overlap.

Make sure their slice of wisdom is heard.  They may have had the one vital clue for the success of a critical part of the project. 

Use Tools to Help Make This Happen

How can you ensure that all team members are contributing in that two-way street of communication?  Obvious ways are to show leadership in meetings.  Make sure your loud, confident types cede the floor and don’t interrupt.  Try a round-table technique for your stand-ups and other essential meetings.  Use a “speaker’s totem” if necessary to shut up repeated interrupters.

Even better - and this is a nice segue into the next of the 3 tricks - consider using tools that gamify and level the playing field when it comes to meetings like the daily standup.  At Smithsoft we use the Slack online chat tool, with its scriptable bot, to manage our daily standup.  Wikis, source control, online test plans, and Dropbox all have their place too.  Trello and the Google suite of cloud-collaboration & document tools are also great.

The single best communication tool is working software: an up-to-date version of the game which everyone on the team can access.  Your tech team should prioritise what’s called “continuous integration”, which basically means new builds are produced automatically when changes are made.

Team members who are working remotely or from home are not out of the loop.  Breakdowns & mismatches are avoided.

Yes - there’s value in face-to-face or video conference hook-ups, but their usefulness has to be balanced against the time-wasting aspect.  Making people report in personally to a meeting like little tin-soldiers might give a feeling of control, but once you’ve dished out that sermon on the mount, how much of it actually sank in?  Meetings, especially ones where the communication is mostly one way, are a number-one time-waster that you can move away from as you pursue a lean and creative team methodology.

Gamify Your Planning

OK, you’ve unleashed your people and tooled up your communication.  Now you have everything buzzing, how to make sure you get the project delivered out of all that energy?

It turns out that there is one huge shift you can make with your project management; if you’re not doing this already, prioritize people over days.

They are not human resources.

At the beginning of a two-week period, the team goes through one by one & makes a list of all the tasks they will commit to complete that period.  We call the period a “sprint”.  As far as possible teams should be multi-disciplinary, with artists, technical artists, programmers, QA people and designers all working side by side on functional areas of the game.  This way they see & communicate issues in real time.

Team members only commit to what they know they can do, and they OWN that.  The whole team agrees that those tasks advance the mission of the project by delivering value to your players.  Because you’ve communicated the mission to them, they know what success looks like, and they know to measure each and every task against that vision to see if it is up to snuff before adding it into the sprint.



The tasks then become cards in a game, played in rounds & turns.  This leads to ceremony around the working day which structures your team’s work in a way that makes it obvious when the agreed-upon plan is strayed from.  The rules of the game help reinforce ways-of-working that stop the classic mistakes of human nature such as "The Planning Fallacy".

Well, you might now say, great - I’ve burned my GANTT chart and I don’t know what we’re delivering any more!

Actually you do.  Because you have used communications, with specific clear terminology, and you have working software, you know exactly what you’re delivering.  You see it every day.

It’s the working software - the current build of the game.

And actually you know MORE - because you can look ahead and see where the trend line goes.  How many stories are we getting through?  This is the team’s VELOCITY.  You can use a fancy graph like this one, but the best thing is just to put a big chart on the wall and use a marker pen to keep it up to date.  Or just track the velocity as a number.  If you like you can break it down by team member.

How many tasks are left?  You know your delivery dates - simply draw a line across your backlog at the cut-off date and you can SEE which features are in and which don’t make it.

https://github.com/sarah-j-smith/trelloburn
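
If you want to turn the velocity and cut-off line above into numbers, the arithmetic is tiny; a back-of-the-envelope sketch (the figures and names are purely illustrative, not from any particular tool):

final class BurnProjection {
    // Sketch only: how many sprints the remaining backlog needs at the current velocity, rounded up.
    static int sprintsRemaining(int storiesLeft, int storiesPerSprint) {
        return (storiesLeft + storiesPerSprint - 1) / storiesPerSprint;
    }

    public static void main(String[] args) {
        System.out.println(sprintsRemaining(60, 12)); // 60 stories at 12 per sprint -> 5 sprints
    }
}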


But I have to have my feature!  

OK, sure - time to horse-trade: what other feature do you want to swap out so that the one you want in makes the cut?

Now you have a dialog that allows you to negotiate and still have a working game.

Closing:  People Before Tools

Beware of becoming seduced by tools.  A lot of managers I have worked with in the past will say things like "We're using Trello" or "We're using Jira" as though that completely describes the ways of working of the team or studio.

A burn-down chart like the one above is not your process.  If that starts to happen, go back to basics and use a number or a rough chart on the wall.

The truth is not some graph produced by a tool - it’s what your people are producing and what the working software says.

Also I'm not saying this way is simple: it's hard, and our teams still struggle every day to stay on target.  But at least by putting people first you as a leader, studio founder or producer are not trying to do all the planning with bogus numbers on a fictional GANTT chart, and you have help with planning from your team.

In closing I’d just like to summarise: you can beat the Planning Fallacy & other mistakes by getting your team to divide creative milestones into small, simple, same-sized, swappable stories stored in a prioritized backlog.

Good luck and have fun!

September 19, 2016 06:34 AM

September 09, 2016

Ben MartinHoundbot suspension test fit

I now have a few crossover plates in the works to hold the upgraded suspension in place. See the front wheel of the robot on your right. The bottom side is held in place with a crossover to go from the beam to a 1/4 inch bearing mount. The high side uses one of the hub mount brackets which are a fairly thick alloy and four pretapped attachment blocks. To that I screw my newly minted alloy blocks which have a sequence of M8 sized holes in them. I was unsure of the final fit on the robot so made three holes to give me vertical variance to help set the suspension in the place that I want.



Notice that the high tensile M8 bolt attached to the top suspension is at a slight angle. In the end the top of the suspension will be between the two new alloy plates. To do that I need to trim some waste from the plates, and to work out where and what to trim I needed to do a test mount. I now have an idea of what to trim for a final test mount ☺.

Below is a close up view of the coil over showing the good clearance from the tire and wheel assembly and the black markings on the top plate giving an idea of the material that I will be removing so that the top tension nut on the suspension clears the plate.


The mounting hole in the suspension is 8mm diameter. The bearing blocks are for 1/4 inch (~6.35mm) diameters. For test mounting I got some 1/4 inch threaded rod and hacked off about what was needed to get clear of both ends of the assembly. M8 nylock nuts on both sides provide a good first mounting for testing. The crossover plate that I made is secured to the beam by two bolts. At the moment the bearing block is held to the crossover by JB Weld only; I will likely use that to hold the piece, then drill through both chunks of alloy and bolt them together too. It's somewhat interesting how well these sorts of JB and threaded rod assemblies seem to work though. But a fracture in the adhesive at 20km/h when landing from a jump without a bolt fallback is asking for trouble.


The top mount is shown below. I originally had the shock around the other way, to give maximum clearance at the bottom so the tire didn't touch the shock. But with the bottom mount out this far I flipped the shock to give maximum clearance to the top mounting plates instead.


So now all I need is to cut down the top plates, drill bolt holes for the bearing to crossover plate at the bottom, sand the new bits smooth, and maybe I'll end up using the threaded rod at the bottom with some JB to soak up the difference from 1/4 inch to M8.

Oh, and another order to get the last handful of parts needed for the mounting.

September 09, 2016 02:48 AM

September 04, 2016

Ben MartinHoundbot rolling stock upgrade

After getting Terry the robot to navigate around inside with multiple Kinects as depth sensors I have now turned my attention to outdoor navigation using two cameras as sensors. The cameras are from a PS4 Eye which I hacked to be able to connect to a normal machine. The robot originally used 5.4 inch wheels which were run with foam inside them. This sort of arrangement can be seen in many builds in the Radio Controlled (RC) world and worked well when the robot was simple and fairly light. Now that it is well over 10kg the same RC style build doesn't necessarily still work. Foam compresses a bit too easily.

I have upgraded to 12 inch wheels with air tube tires. This jump seemed a bit risky; would the new setup overwhelm the robot? Once I modified the wheels and came up with an initial mounting scheme to test, I think the 12 inch is closer to what the robot naturally wants to have. This should boost the maximum speed of the machine to around 20km/h, which is probably as much as you might want on something autonomous. For example, if your robot can outrun you, things get interesting.




I had to get the wheels attached in order to work out clearances for the suspension upgrade. While the original suspension worked great for a robot that you only add 1-2kg to, with an itx case, two batteries, a fused power supply etc things seem to have added up to too much weight for the springs to counter.

I now have some new small 'coil overs' in hand which are taken from mini mountain bike suspension. They are too heavy for what I am using, with around 600lb/inch compression. I have in mind some places that use coil overs in between the RC ones and the push bike ones which I may end up using. Also with slightly higher travel distance.



As the photo reveals, I don't actually have the new suspension attached yet. I'm thinking about a setup based around two bearing mounts from sparkfun. I'd order from servocity but sfe has cheaper intl shipping :o Anyway, two bearing mounts at the top, two at the bottom and a steel shaft that is 8mm in the middle and 1/4 inch (6.35mm) on the edges. Creating the shafts like that, with the 8mm part just the right length will trap the shaft between the two bearing mounts for me. I might tack weld on either side of the coil over mounts so there is no side to side movement of the suspension.

Yes, hubs and clamping collars were my first thought for the build and would be nice, but a reasonable result for a manageable price is also a factor.

September 04, 2016 06:16 AM

August 04, 2016

Tim KentElectric bike build part 4

Continued from Electric bike build part 3.

The next stage of the build was fitting the additional sensors. The kit came with a wheel speed sensor, and as my bike has drop bars I optioned two HWBS (Hidden Wire Brake Sensor) devices.

Here's the wheel speed sensor and magnet fitted, talk about a monster magnet:


I decided to fit the brake sensors along the bars themselves by peeling back the bar tape a bit:


Nowhere to be seen, and as an added bonus it gives the bars quite an ergo feel:


I now had to plan where to mount the screen, throttle and controls. I also had to keep room for a headlight and Garmin bike computer. The biggest problem (which I had known all along) was the internal diameter of the throttle and controls (22.2mm) being too small for my drop bars.

Having access to a 3D printer, I designed some parts to mount these accessories.

The first part I designed was a spacer for the DPC-14 display so I could rotate the screen on the bracket by 180 degrees. Some Bafang documentation suggests this is possible, but on my screen with a charge port, the charge port wires get in the way.

Here are some pics of the screen with the spacer fitted, and with the fasteners replaced with longer ones to retain the same thread engagement:




You can download the model file from Thingiverse.

Continued at Electric bike build part 5.

August 04, 2016 11:43 AM

Tim KentElectric bike build part 5

Continued from Electric bike build part 4.

After about 6 revisions I finally had a workable design for mounting my accessories. I decided to design a mount in two parts that when brought together form a ring around the stem to allow a second "row" of stuff to be mounted.

Here's the final design:


It is all held together only by the accessories mounted to it, but it seems quite solid. Originally the top and bottom parts were identical, but I had to change to an offset design to mount the light higher. The larger lobe is to accommodate the headlight's mount, which is designed for an oversized bar.

Here's everything bolted up and in place, I'm very happy with the result:


The throttle is easily within reach of my left thumb when not in the drops, and I can safely keep my right hand near the front brake at the same time. I also really like having the Bafang display quite far forward as it makes it always easy to see. The IPS display looks amazing even in direct sunlight:


I took the bike for a test ride and wasn't able to wipe the grin from my face! Talk about making cycling effortless!

Here's the bike fully completed, although disregard the low seat height:


The 42/11-30 gearing seems to work quite well for my intended purpose of using this bike as a commuter.

I have a warning though: I have used the bike four times now and have done about 90km. In the last 10km I noticed a bit of a clicking noise when pedalling; it turns out the lock ring had become slightly loose. This surprised me as I used thread locker and applied the correct amount of torque to the lock ring; I had assumed the ones having trouble weren't doing the install correctly. Today I re-tightened the lock ring to "epic tight" and will monitor it.

August 04, 2016 11:42 AM

Tim KentElectric bike build part 3

Continued from Electric bike build part 2.

I now had all the parts and tools available to fit the motor to the BB shell. I had read that the high torque from the motor could dent alloy frames so I picked up some Neoprene rubber to try and reduce the chance of this happening:


Rubber applied:


Test fit, the final fit will have the rubber fully compressed between the motor and frame:


Looks good!

One slight issue I had was the hole on this steel bracket being drilled slightly offset, meaning the fastener couldn't be fitted without binding. I drilled the hole 0.5mm larger, then applied nail polish to the exposed metal:


Thanks yet again to my Aldi bike toolkit I had the right tool on hand. I was able to apply (blue) Loctite then tighten while holding the motor against the frame, fully compressing the rubber:


All done:


I then applied Loctite to the two additional bolts and the extra lockring (you can use a standard Shimano Hollowtech II tool) then fitted:


Here's a pic after I fitted both crankarms and chain:


It's starting to come together!


Continued at Electric bike build part 4.

August 04, 2016 10:52 AM


Last updated: September 24, 2017 03:30 PM. Contact Humbug Admin with problems.