

HUMBUGers


November 01, 2016

Ben Martin: Houndbot progresses

All four new shocks are now fitted! The tires are still deflated so they look a little wobbly. I ended up using a pillow mount with a 1/4 inch channel below it. The pillow is bolted to the channel from below and the channel is then bolted from the sides through the alloy beams. The glory here is that the pillows will never come off. If the bolts start to vibrate loose they will hit the beam and be stopped. They cannot force the pillow mount up to get more room because of the bolts securing the 1/4 inch channel to the alloy beams coming in from the sides.


I'm not overly happy with the motor hub mount to wheel connection, which will be one of the next points of update. Hopefully soon I will have access to a CNC with a high power spindle and can machine some alloy crossover parts for the wheel assembly. It has been great to use a dual vice drill and other basic power and hand tools to make alloy parts so far, but a powerful CNC will open the door to much more 2.5D work using cheapish sheet alloy.

But for now, the houndbot is on the move again. No longer do the wheels just extend outward under load. Though I don't know if I want to test the 40km/h top speed without updating some of the mountings and making some bushings first.


November 01, 2016 10:08 AM

October 16, 2016

Clinton Roy: In Memory of Gary Curtis

This week we learnt of the sad passing of a long-term regular attendee of Humbug, Gary Curtis. Gary was often early, and nearly always the last to leave.

One of Gary’s prized possessions was his car, more specifically his LINUX number plate. Gary was very happy to be our official airport-conference shuttle for linux.conf.au keynote speakers in 2011 with this number plate.

Gary always had very strong opinions about how Humbug and our Humbug organised conferences should be run, but rarely took to running the events himself. It became a perennial joke at Humbug AGMs that we would always nominate Gary for positions, and he would always decline. Eventually we worked out that Humbug was one of the few times Gary wasn’t in charge of a group, and that was relaxing for him.

A topic that Gary always came back to was genealogy, especially the phone app he was working on.

A peculiar quirk of Humbug meetings is that they run on Saturday nights, and thus we often have meetings at the same time as Australian elections. Gary was always keen to keep up with the election on the night, often with interesting insights.

My most personal memory of Gary was our road trip after OSDC New Zealand: we did something like three days of driving around in a rental car, staying at hotels along the way. Gary’s driving did little to impress me, but he was certainly enjoying himself.

Gary will be missed.

 


Filed under: Uncategorized

October 16, 2016 04:39 AM

October 10, 2016

Tim Kent: Installing Debian on the APU2

This is a short post detailing the install of Debian on the PC Engines APU2 using PXE.

First of all you'll need to ensure you are running BIOS version 160311 or newer. You can find the BIOS update details here. If the PXE options are missing then there's a good chance you aren't running a new enough BIOS!

Connect to the system's console via the serial port using a baud rate of 115,200. I typically use screen on Linux/macOS or PuTTY on Windows.

Start the APU2 and press Ctrl+B when prompted to enter iPXE, or choose iPXE from the boot selection menu (F10).

Attempt to boot from PXE using DHCP:
iPXE> autoboot

If all is well you will get to the "Debian GNU/Linux installer boot menu" heading; press TAB to edit the Install menu entry.

This should bring up something along the lines of:
> debian-installer/amd64/linux vga=788 initrd=debian-installer/amd64/initrd.gz --- quiet

You'll want to define the serial console by adding the console parameter to the end (and preseed parameter if used):
> debian-installer/amd64/linux vga=788 initrd=debian-installer/amd64/initrd.gz --- quiet console=ttyS0,115200

Press enter and you should be on your way!

October 10, 2016 09:06 AM

Tim Kent: PXE boot Debian using RouterOS as PXE server

I would typically use a Linux server for the purposes of PXE booting, but this is so straightforward it's a very attractive option. I'm using a MikroTik RB2011 (RouterOS v6.34.6) successfully.

This example assumes your router's LAN IP is 172.16.8.1 and the local subnet is 172.16.8.0/24.

First of all, download the netboot archive to a Linux machine (I'm using a Raspberry Pi here):
tim@raspberrypi /tmp $ wget http://ftp.au.debian.org/debian/dists/jessie/main/installer-amd64/current/images/netboot/netboot.tar.gz
tim@raspberrypi /tmp $ wget http://ftp.au.debian.org/debian/dists/jessie/main/installer-amd64/current/images/SHA256SUMS

Check that your archive matches the checksum file:
tim@raspberrypi /tmp $ grep `sha256sum netboot.tar.gz` SHA256SUMS
SHA256SUMS:460e2ed7db2d98edb09e5413ad72b71e3132a9628af01d793aaca90e7b317d46  ./netboot/netboot.tar.gz

Extract the archive to a tftp directory:
tim@raspberrypi /tmp $ mkdir tftp && tar xf netboot.tar.gz -C tftp

Copy tftp folder to the MikroTik:
tim@raspberrypi /tmp $ scp -r tftp admin-tim@172.16.8.1:

On the MikroTik, configure TFTP with a base directory of /tftp (omitting req-filename matches all requests):
[admin-tim@MikroTik] /ip tftp add ip-address=172.16.8.0/24 real-filename=tftp

Configure DHCP for PXE booting:
[admin-tim@MikroTik] /ip dhcp-server network set [ find address=172.16.8.0/24 ] boot-file-name=pxelinux.0 next-server=172.16.8.1

October 10, 2016 02:11 AM

September 19, 2016

Sarah Smith - Game Blog: Burn your GANTT Charts & Deliver Game Releases Like a Boss

We all want our teams to be treated as the awesome creative people they are, but there are deadlines. As producers & studio founders responsible for making sure the place stays afloat, how can we deliver our games on schedule and not put our people's feet to the fire?



How about setting fire to your schedule for a damn start?  We all know the dates on those things are straight-up fiction!  They weren't working anyway, right?  The reason they fail is the Planning Fallacy: humans are fundamentally incapable of making accurate estimates of work to be done ahead of time.

There are alternative approaches where torching those old traditional MS Project style schedules may well be a good thing, especially if your studio is looking for a clean slate on your team dynamics.  My talk at Seattle's Mobile Game Forum on October 18th 2016 deals with this topic and I'm going to throw a few spoilers here (the image above is a slide from that talk) - but go see the talk if you can, as the full content will be there.

This different way of working is a methodology developed over a decade of working with creative teams.  I use it today at my studio Smithsoft Games.  At its core it's a "kind-of agile" modified for the small, distributed, start-up teams of frequent collaborators which typify my teams.
What works well for my team are ideas raided from the agile camp.
I don't necessarily buy into the whole agile manifesto, because my teams & projects are usually distributed and multi-disciplinary, so not always face-to-face.  We respond to players (our customers) but they are not around the table: we typically use metrics & the occasional field test session, so we don't have a customer figure handy as required by agile.

Instead I have found that what works well is to know the agile principles and use their best ideas as far as possible while working in a way that is resilient against breakdowns.  I call this "raiding agile".

Backlogs are the best idea ever, and a great takeaway from the agile camp.  The first and best step you can take along the road to delivering your game, without crushing your people in schedule hell, is to switch to backlog-driven development.

The basic approach we take (raiding agile) can be broken down into 3 parts:


To get your head around the approach we take, just use one simple mnemonic: it all comes down to people-power.  Free your people & empower them to solve the problem of planning your project; communicate constantly and excellently with your people; and gamify the planning using backlogs, so that every day your people are doing the planning work of keeping your project on track, and having fun while they do it.

So how does it work in practice?

Unleash Your People


James Everett talks on "Trust in Game Design" at NZGDC 2016
Unleashing your people is about trust.  Trust is a top item in the agile manifesto and something crucial to getting your team working.  Top designers & producers in game development already know this even if they don't use agile.

In the seminal 1999 book “Peopleware: Productive Projects and Teams” DeMarco and Lister explain that management’s job is not to make people work.  Instead your job is to make it possible for them to work.

As a leader, you work to build trust: work to be in a situation where the people who work on your team trust you, and more importantly you trust them.

Your success will be defined not by how carefully you lay your plans; but by how well you recover when they go wrong.  And you will go wrong.  When it happens, your team will be there; if you have built trust.

If you try to impose control from the top, it amounts to a lack of trust that people will do what’s needed for the project.  Having your people waiting around for sign-off, when they could be working, is waste.

People can get into what I call the “Inbox” mentality, where they live their work lives like a rat in a cage pressing a bar, checking for the next item of work to be given to them by their supervisor.  That is not how lean, effective teams work.  Companies that operate via their inbox, and by pleasing bosses, lose the ability to respond quickly; they sacrifice agility and vibrancy for no actual gain.

Tool up your Communication


OK, so we’re going to TRUST our people and empower them to work on the project.  How do we as leaders make sure that the project is on track, then?  Isn’t it our job to motivate them?

Imagine the throughput of your team is defined by this triangle - in this model the sides remain in constant proportion.


Once you have hired someone, competence is fixed.  Let's assume you have a team and you want to make the best of them.

Motivation - you could be forgiven for thinking that comes from being paid.  Some think in a dated, authoritarian way: that motivation must be imposed from the top.  Let me tell you, for creative people it comes from a raft of things, such as accomplishment and recognition.

But in my experience it's the communication which winds up being the restricting factor MOST often.  If you improve communication, you will begin to reach the potential of the motivated, competent team.

Technical projects have failures, but those are due to human reasons - most often a lack of shared understanding.  The best bang for your buck, for improving team performance, is time spent on communication.  And that means written and verbal, charts on the wall, wiki pages - anything that gets a shared message across.

Your job as leader is to filter outside impacts that can derail the team, simplify the view they have of the overall product and provide a compass set on true north.  Provide a consistent drum-beat that gives the project a cadence.  Through that repeated message infect the team with your own enthusiasm for the mission of the immediate project goals.

“We are going to ship the beta in March” - keep talking about what that will mean, and what it will look like.  Check back to see if people understand what that goal is - when it bites, what counts as success.

Have a GBC - a great big chart - stuck physically to the wall that shows that goal.  If I walk up to one of your team members and ask: what is the next big milestone, what's in it, and when is it due?  It should roll off their tongue instantly - if not, then that is on you.

Be Precise & Specific

Systematize names for things and use them across the board.  Don’t use vague words like “the build” or worse “IT”.  Make sure your tools, your plans and your verbal communications always use the same terms.  People cannot see inside your head.
When will “IT” be ready?
Is “IT” done yet?
If you ask an artist working on a concept for a character when “IT” will be ready, does that mean the concept, or a set of poses and clothes?  Use the milestone and feature names to avoid the “IT” problem.

Make Sure Communication is Two-Way

So you’re communicating clearly with your trusted team.  Is that communication two-way?  Trust comes when your team knows that they can tell you ANYTHING without fear.  That comes when you admit you don’t know.

Google’s three-year study Project Aristotle set out in 2012 to see what made the best teams.  Their leads had believed conventional wisdom, like the idea that the best teams were the ones with the most talented members.  But it had never been studied to find out how true that was.

It turns out that people needed Psychological Safety, which is “a sense of confidence that the team will not embarrass, reject or punish someone for speaking up”.

As a leader you have to make sure that members of your team are rewarded for contributing their slice of wisdom.

Have you heard a conversation where there’s that one person who “knows more” than every one else?  Others should “shut up” because the authority is talking?

Knowledge is not a high-watermark.  Even if someone is still learning compared to an experienced person, remember this: the contributions of even the most inexperienced member of your team are never submerged by another's.  Team members' contributions overlap.

Make sure their slice of wisdom is heard.  They may have had the one vital clue for the success of a critical part of the project. 

Use Tools to Help Make This Happen

How can you ensure that all team members are contributing in that two-way communication street?  Obvious ways are to show leadership in meetings.  Make sure your loud, confident types cede the floor and don’t interrupt.  Try a round-table technique for your stand-ups and other essential meetings.  Use a “speaker’s totem” if necessary to shut up repeated interrupters.

Even better - and this is a nice segue into the next of the 3 tricks - consider using tools that gamify and level the playing field when it comes to meetings like the daily standup.  At Smithsoft we use the Slack on-line chat tool, with its scriptable bot, to manage our daily standup.  Wikis, source control, online-test-plans, and Drop Box all have their place too.  Trello and the Google suite of cloud-collaboration & document tools are also great.

The single best communication tool is working software: an up-to-date version of the game which everyone on the team can access.  Your tech team should prioritise what’s called “continuous integration”, which basically means automatically getting new builds when changes are made.

Team members who are remote, or working from home, are not out of the loop.  Breakdowns & mismatches are avoided.

Yes, there’s value in face-to-face or video conference hook-ups, but their usefulness has to be balanced against the time-wasting aspect.  Making people report in personally to a meeting like little tin soldiers might give a feeling of control, but once you’ve dished out that sermon on the mount, how much of it actually sank in?  Meetings, especially ones where the communication is mostly one-way, are a number one time-waster that you can move away from as you pursue a lean and creative team methodology.

Gamify Your Planning

OK, you’ve unleashed your people and tooled up your communication.  Now you have everything buzzing, how to make sure you get the project delivered out of all that energy?

It turns out that there is one huge shift you can make with your project management; if you’re not doing this already, prioritize people over days.

They are not human resources.

At the beginning of a two week period, the team goes through one by one & makes a list of all the tasks they will commit to complete that period.  We call the period a “sprint”.  As far as possible teams should be multi-disciplinary with artists, technical artists, programmers, QA people and designers all working side by side on functional areas of the game.  This way they see & communicate issues in real time.

Team members only commit to what they know they can do, and they OWN that.  The whole team agrees that those tasks advance the mission of the project, by delivering value to your players.  Because you’ve communicated the mission to them, they know what success looks like, they know to measure each and every task against that vision to see if it is up to snuff; before adding it into the sprint.



The tasks then become cards in a game, played in rounds & turns.  This leads to ceremony around the working day which structures your team's work in a way that makes it obvious when the agreed-upon plan is strayed from.  The rules of the game help reinforce ways-of-working that stop the classic mistakes of human nature, such as "The Planning Fallacy".

Well, you might now say: great - I’ve burned my GANTT chart and I don’t know what we’re delivering any more!

Actually you do.  Because you have used communications, with specific clear terminology, and you have working software, you know exactly what you’re delivering.  You see it every day.

It's the working software - the current build of the game.

And actually you know MORE, because you can look ahead and see where the trend line goes.  How many stories are we getting through?  This is the team's VELOCITY.  You can use a fancy graph like this one.  But the best thing is just to put a big chart on the wall and use a marker pen to keep it up to date.  Or just track the velocity as a number.  If you like you can break it down by team member.

How many tasks are left?  You know your delivery dates - simply draw a line across your backlog at the cut-off date and you can SEE which features are in and which don't make it.
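That cut line can be sketched in a few lines of code.  This is purely illustrative - the story names, point values and velocity below are made up, not pulled from Trello or any real backlog:

```python
def cut_line(backlog, velocity, sprints_left):
    """Which stories make the release, given a prioritised backlog?

    backlog: list of (story_name, story_points) in priority order.
    velocity: measured story points the team completes per sprint.
    sprints_left: sprints remaining before the cut-off date.
    """
    budget = velocity * sprints_left
    included = []
    for name, points in backlog:
        if points > budget:
            break  # the line: this story and everything below it slips
        included.append(name)
        budget -= points
    return included

# Hypothetical example: 2 sprints left at 7 points/sprint = 14 points of budget.
backlog = [("tutorial", 5), ("shop", 8), ("boss fight", 8), ("cloud save", 13)]
print(cut_line(backlog, velocity=7, sprints_left=2))  # ['tutorial', 'shop']
```

Everything above the line ships; everything below it becomes the material for the horse-trading conversation that follows.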

https://github.com/sarah-j-smith/trelloburn


But I have to have my feature!  

OK, sure - time to horse trade, what other feature do you want to swap out so that the one you want in makes the cut?

Now you have a dialog that allows you to negotiate and still have a working game.

Closing:  People Before Tools

Beware of becoming seduced by tools.  A lot of managers I have worked with in the past will say things like "We're using Trello" or "We're using Jira" as though that completely describes the ways of working of the team or studio.

A burn-down chart like the one above is not your process.  If the tool starts to become the process, go back to basics and use a number or a rough chart on the wall.

The truth is not some graph produced by a tool - it's what your people are producing and what the working software says.

Also, I'm not saying this way is simple: it's hard, and our teams still struggle every day to stay on target.  But at least by putting people first, you as a leader, studio founder or producer are not trying to do all the planning with bogus numbers on a fictional GANTT chart, and you have help with planning from your team.

In closing I’d just like to summarise: you can beat the Planning Fallacy & other mistakes by getting your team to divide creative milestones into small, simple, same-sized, swappable stories stored in a prioritized backlog.

Good luck and have fun!

September 19, 2016 06:34 AM

September 09, 2016

Ben Martin: Houndbot suspension test fit

I now have a few crossover plates in the works to hold the upgraded suspension in place. See the front wheel of the robot on your right. The bottom side is held in place with a crossover to go from the beam to a 1/4 inch bearing mount. The high side uses one of the hub mount brackets, which are fairly thick alloy, and four pre-tapped attachment blocks. To that I screw my newly minted alloy blocks which have a sequence of M8 sized holes in them. I was unsure of the final fit on the robot so I made three holes to give me vertical variance to help set the suspension in the place that I want.



Notice that the high tensile M8 bolt attached to the top suspension is at a slight angle. In the end the top of the suspension will be between the two new alloy plates. To do that I need to trim some waste from the plates, but first I needed to test mount to see where and what needs to be trimmed. I now have an idea of what to trim for a final test mount ☺.

Below is a close up view of the coil over showing the good clearance from the tire and wheel assembly and the black markings on the top plate giving an idea of the material that I will be removing so that the top tension nut on the suspension clears the plate.


The mounting hole in the suspension is 8mm diameter. The bearing blocks are for 1/4 inch (~6.35mm) diameters. For test mounting I got some 1/4 inch threaded rod and hacked off about what was needed to get clear of both ends of the assembly. M8 nylock nuts on both sides provide a good first mounting for testing. The crossover plate that I made is secured to the beam by two bolts. At the moment the bearing block is held to the crossover by JB Weld only; I will likely use that to hold the piece, and then drill through both chunks of alloy and bolt them together too. It's somewhat interesting how well these sorts of JB Weld and threaded rod assemblies seem to work, though. But a fracture in the adhesive at 20km/h when landing from a jump, without a bolt fallback, is asking for trouble.


The top mount is shown below. I originally had the shock around the other way, to give maximum clearance at the bottom so the tire didn't touch the shock. But with the bottom mount out this far I flipped the shock to give maximum clearance to the top mounting plates instead.


So now all I need is to cut down the top plates, drill bolt holes for the bearing to crossover plate at the bottom, sand the new bits smooth, and maybe I'll end up using the threaded rod at the bottom with some JB to soak up the difference from 1/4 inch to M8.

Oh, and another order to get the last handful of parts needed for the mounting.

September 09, 2016 02:48 AM

September 04, 2016

Ben Martin: Houndbot rolling stock upgrade

After getting Terry the robot to navigate around inside with multiple Kinects as depth sensors, I have now turned my attention to outdoor navigation using two cameras as sensors. The cameras are from a PS4 Eye, which I hacked to be able to connect to a normal machine. The robot originally used 5.4 inch wheels which were run with foam inside them. This sort of arrangement can be seen in many builds in the Radio Controlled (RC) world and worked well when the robot was simple and fairly light. Now that it is well over 10kg the same RC style build doesn't necessarily still work. Foam compresses a bit too easily.

I have upgraded to 12 inch wheels with air tube tires. This jump seemed a bit risky - would the new setup overwhelm the robot? Once I modified the wheels and came up with an initial mounting scheme to test, I think the 12 inch is closer to what the robot naturally wants to have. This should boost the maximum speed of the machine to around 20km/h, which is probably as much as you might want on something autonomous. For example, if your robot can outrun you, things get interesting.
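As a rough sanity check on that 20km/h figure (the numbers here are just the ones mentioned above, and the function is mine, not from the robot's code), the wheel rpm needed for a given ground speed follows directly from the wheel circumference:

```python
import math

def wheel_rpm(speed_kmh, wheel_diameter_inch):
    """Wheel rpm needed to travel at speed_kmh on a wheel of the given diameter."""
    diameter_m = wheel_diameter_inch * 0.0254        # inches to metres
    circumference_m = math.pi * diameter_m           # distance per revolution
    speed_m_per_min = speed_kmh * 1000 / 60          # km/h to m/min
    return speed_m_per_min / circumference_m

# 20 km/h on 12 inch wheels works out to roughly 350 rpm at the wheel.
print(round(wheel_rpm(20, 12)))  # 348
```

So the motors only need to turn the bigger wheels at a few hundred rpm for that top speed, which is well within reach of typical hub motor gearing.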




I had to get the wheels attached in order to work out clearances for the suspension upgrade. While the original suspension worked great for a robot that you only add 1-2kg to, with an itx case, two batteries, a fused power supply etc things seem to have added up to too much weight for the springs to counter.

I now have some new small 'coil overs' in hand which are taken from mini mountain bike suspension. They are too heavy for what I am using, at around 600lb/inch compression. I have in mind some coil overs in between the RC ones and the push bike ones which I may end up using, also with slightly higher travel distance.



As the photo reveals, I don't actually have the new suspension attached yet. I'm thinking about a setup based around two bearing mounts from SparkFun. I'd order from ServoCity but sfe has cheaper intl shipping :o Anyway, two bearing mounts at the top, two at the bottom, and a steel shaft that is 8mm in the middle and 1/4 inch (6.35mm) on the ends. Creating the shafts like that, with the 8mm part just the right length, will trap the shaft between the two bearing mounts for me. I might tack weld on either side of the coil over mounts so there is no side to side movement of the suspension.

Yes, hubs and clamping collars were my first thought for the build and would be nice, but a reasonable result for a manageable price is also a factor.

September 04, 2016 06:16 AM

August 04, 2016

Tim Kent: Electric bike build part 4

Continued from Electric bike build part 3.

The next stage of the build was fitting the additional sensors. The kit came with a wheel speed sensor, and as my bike has drop bars I optioned two HWBS (Hidden Wire Brake Sensor) devices.

Here's the wheel speed sensor and magnet fitted, talk about a monster magnet:


I decided to fit the brake sensors along the bars themselves by peeling back the bar tape a bit:


Nowhere to be seen, and as an added bonus it gives the bars quite an ergo feel:


I now had to plan where to mount the screen, throttle and controls. I also had to keep room for a headlight and Garmin bike computer. The biggest problem (which I had known all along) was the internal diameter of the throttle and controls (22.2mm) being too small for my drop bars.

Having access to a 3D printer, I designed some parts to mount these accessories.

The first part I designed was a spacer for the DPC-14 display so I could rotate the screen on the bracket by 180 degrees. Some Bafang documentation suggests this is possible, but on my screen with a charge port, the charge port wires get in the way.

Here are some pics of the screen with the spacer fitted, and with the fasteners replaced with longer ones to retain the same thread engagement:




You can download the model file from Thingiverse.

Continued at Electric bike build part 5.

August 04, 2016 11:43 AM

Tim Kent: Electric bike build part 5

Continued from Electric bike build part 4.

After about 6 revisions I finally had a workable design for mounting my accessories. I decided to design a mount in two parts that when brought together form a ring around the stem to allow a second "row" of stuff to be mounted.

Here's the final design:


It is all held together only by the accessories mounted to it, but it seems quite solid. Originally the top and bottom parts were identical, but I had to change to an offset design to mount the light higher. The larger lobe is to accommodate the headlight's mount, which is designed for an oversized bar.

Here's everything bolted up and in place, I'm very happy with the result:


The throttle is easily within reach of my left thumb when not in the drops, and I can safely keep my right hand near the front brake at the same time. I also really like having the Bafang display quite far forward as it makes it always easy to see. The IPS display looks amazing even in direct sunlight:


I took the bike for a test ride and wasn't able to wipe the grin from my face! Talk about making cycling effortless!

Here's the bike fully completed, although disregard the low seat height:


The 42/11-30 gearing seems to work quite well for my intended purpose of using this bike as a commuter.

I have a warning though; I have used the bike four times now and have done about 90km. In the last 10km I noticed a bit of a clicking noise when pedalling; it turns out the lock ring had become slightly loose. This surprised me as I used thread locker and applied the correct amount of torque to the lock ring - I had assumed the ones having trouble weren't doing the install correctly. Today I re-tightened the lock ring to "epic tight" and will monitor it.

August 04, 2016 11:42 AM

Tim Kent: Electric bike build part 3

Continued from Electric bike build part 2.

I now had all the parts and tools available to fit the motor to the BB shell. I had read that the high torque from the motor could dent alloy frames so I picked up some Neoprene rubber to try and reduce the chance of this happening:


Rubber applied:


Test fit, the final fit will have the rubber fully compressed between the motor and frame:


Looks good!

One slight issue I had was the hole on this steel bracket being drilled slightly offset, meaning the fastener wasn't able to be fitted without binding. I drilled the hole 0.5mm larger, then applied nail polish to the exposed metal:


Thanks yet again to my Aldi bike toolkit I had the right tool on hand. I was able to apply (blue) Loctite then tighten, at the same time holding the motor against the frame and fully compressing the rubber:


All done:


I then applied Loctite to the two additional bolts and the extra lockring (you can use a standard Shimano Hollowtech II tool), then fitted them:


Here's a pic after I fitted both crankarms and chain:


It's starting to come together!


Continued at Electric bike build part 4.

August 04, 2016 10:52 AM

August 03, 2016

Sarah Smith - Game Blog: Pandora's Books Coming Soon!

Lately all my time has been devoted to Game Development so I have not been posting much.  What have I been up to?

Since January 2016 with Space-Bot Alpha behind me, I expanded the studio and bet the farm on a new game Pandora's Books.

It's a game about Pandora, who opens the wrong book and releases the chaos wizard & his minions onto the worlds of classic stories.  As a player your job is to unscramble the mixed-up words, to defeat the wizard & his minions, and save the cities of these timeless tales.  The game ships with War of the Worlds unlocked, and as you play you collect clues that can be used to help when you're stuck on a word, and also saved up to unlock books.




The team has been amazing, and last week we pushed the game to Apple's App Store editorial team in the hope of getting a feature!  This is the "big time" for me, and I guess even if we don't get the much-vaunted Apple feature we still have a great game which will go live on August 18th.

If you want to follow my journey as an Indie Game Developer why not subscribe to follow this blog and keep up with future posts?

Thanks for reading!

August 03, 2016 05:44 AM

Sarah Smith - Game Blog: Sprite Kit for the Win

My game development work these days is all done in Apple's Sprite Kit, which is a dedicated 2D game development technology.  To my knowledge Apple is the only device vendor making game APIs like Sprite Kit.  Google has some back-end stuff but nothing to put sprites on the screen.

Sprite Kit is a "no engine solution".

The correct term for it is framework: Sprite Kit is not an "Engine".  You hear that as a common term around game dev circles: "what engine are you using?".  Sprite Kit is a "no engine solution".  In this article I lay out my reasons for choosing Sprite Kit for my professional studio's development, and what this distinction means in practice.

Editing a Sprite Kit Scene in Xcode

In game dev you try to separate code from assets, functionality from data, as you do in regular programming or good software engineering.  A key difference is the degree of separation and what role your game plays on the device when you use a framework rather than an engine.


What is it that we get from game engine makers, like GameMaker, Adventure Game Studio, Unity3D, Unreal Engine or CryEngine?  They ship pre-packaged functionality, such as "place that quad over there with this texture on it", and "apply that animation to this object node".  You provide a package of data, which includes visual assets such as images & models, and logic assets such as scripts, that the game engine loads and evaluates at various times to render the game.

Framework eg Sprite Kit vs an Engine
What is the real-world implication of this?  Basically you can only (easily) do what the engine provides for.  If you write a script or make an object that invokes cool engine functionality you can move mountains and make magic.  But when you try to just make a button behave the way you want, and the engine doesn't support that, it becomes ridiculously painful: mountains of code, or even an external plugin, have to be used.

These attributes mean that engines are great for prototyping (as long as your prototype is a game that the engine can easily make), but things get hard quickly when you try to exert your will and design intent over the game, and also when you start to get closer to publishing and want to integrate platform APIs for things like In-App Purchases.


Having said that, engines like Unity are incredibly powerful: talented 3D programmers have honed a lot of common 3D rendering functionality into a tightly tuned and well-documented beast which you can call on to get your game up and running quickly.  Of course you don't get this for free: a professional studio has to pay $1500 USD for each of Unity Pro, Unity iPhone and Unity Android - so a total bill of $4500 USD (or around $6000 AUD) per seat.  And then you pay that again when Unity goes up a major version number.  It's worth every penny - if you need that 3D functionality.

The essential trade-off is that particular engines are well-suited to producing a particular type of game.  In fact most engines already have a type of game in mind.  Unity is ideally suited to games where a 3D avatar in either 1st person or 3rd person is running around a world, followed by a camera.

You can of course fix that camera, get rid of the avatar, and place a 2D orthographic scene in the view to thereby construct a 2D game.  This is kind of like using a Mack truck as a garden shed: you've got all this powerful 3D architecture under the hood, and you're fixing most of it in place so that you just use the 2D side of it.  But as mobile devices get bigger and more powerful, and include 3D rendering hardware, it turns out you can get away with doing this.

As a programmer however, you can do away with all this extra baggage and enjoy the following benefits:

What are the drawbacks:
One of the difficulties with Sprite Kit is that many artists, animators and other creative professionals who are not programmers do not have experience with it yet, and thus find it hard to directly contribute their assets to it, and work with the tool set.

However I think this will change as Apple has a huge commitment to making coding with Swift and Xcode more open and available to the wider creative community.

The Sprite Kit route was a no-brainer for me as I have a career as a software engineer behind me in my game development.  Also I had my previous experience with Cocos2D, which has a very similar API.  I don't need the features that Unity excels at, such as 3D.  As a founder of a studio I don't want our IP saddled permanently with the licensing and lock-in hassles of being welded to Unity3D - which is a proprietary technology.  Unity also likes to monitor your games so that it can tell if you're adhering to the licensing conditions, and it requires you to notify your players of this.  Basically for me it's about having control over my game and my IP.

Swift, and pretty much all of the Xcode & Apple toolchain, is Open Source, and Xcode itself is at the end of the day just an IDE, so there is nothing stopping me editing my code in JetBrains AppCode or other tools.  I could edit my assets in other tools such as, for example, Texture Packer and TileEd.

Want to give Sprite Kit a serious try for your studio's next game?  I strongly suggest buying Ray Wenderlich's PDF book "2D iOS & tvOS Games by Tutorials".  They update their stuff and you get the updates for free, and despite the title it's not just a starter book: it teaches you 90% of what you need to get going, and gives a huge bunch of sample code as well.

Your mileage may vary.  But have a look at the pros and cons, and especially try to think of the longer game if you're a professional studio, or consider that you might want a buy-out option so that you can at some point exit your studio.  Due diligence and buy-out exits are much easier when you have the fewest possible licensing hassles linked to your IP.

I hope this article helps, and if you share some of the same aspects that I do when it comes to game development you can be clearer about your choice of engine or framework.

Thanks for reading!

August 03, 2016 05:32 AM

Sarah SmithMy Studio Hits the Big Time

Lately all my time has been devoted to Game Development and setting up Smithsoft Games.

Since January 2016 with Space-Bot Alpha behind me, I expanded the studio and bet the farm on a new game Pandora's Books.

It's a game about Pandora, who opens the wrong book and releases the chaos wizard & his minions onto the worlds of classic stories.  As a player your job is to unscramble the mixed-up words, to defeat the wizard & his minions, and save the cities of these timeless tales.  The game ships with War of the Worlds unlocked, and as you play you collect clues that can be used to help when you're stuck on a word, and also saved up to unlock books.



The team has been amazing, and last week we pushed the game to Apple's App Store editorial team in the hope of getting a feature!  This is the "big time" for me, and I guess even if we don't get the much-vaunted Apple feature we still have a great game which will go live on August 18th.

If you want to follow my journey as an Indie Game Developer check my dev blog for more details.

August 03, 2016 02:09 AM

July 25, 2016

Ben Fowler"How do I make this hard to misuse?"

If you think about it, there is a continuum of user-friendliness when using software, either through a GUI, or programmatically via an API. It makes sense to talk about software as being easy to use; it also makes sense to look at it from the other side: how hard is it to misuse? Paul 'rusty' Russell summed it up beautifully here: How do I make this hard to misuse?. His ease-of-use

July 25, 2016 12:45 PM

July 23, 2016

Tim KentElectric bike build part 2

Continued from Electric bike build part 1.

The tool now worked fine to remove the bottom bracket:


The shell looks really clean already:


Here's the next snag, the bracket for the shifter cables gets in the way of the BBS02:


After a bit of thinking, I came up with the idea of using some cable outer tube and a cable clamp. Here are the clamps I bought:


Looks promising:


Bolts up fine:


Yes! It worked:


Time will tell if this is a reliable solution, but it looks pretty good to me. I'll probably zip tie the cable outer to the frame at each side with a gentle radius.

Continued at Electric bike build part 3.

July 23, 2016 06:24 AM

July 21, 2016

Tim KentElectric bike build part 1

I have been collecting the bits to put together my first electric bike. It's still a work in progress so this will be my build diary. The objective is a reliable and quick road/bikeway commuter to make biking to work a more attractive option!

My first purchase was a used 2008 Giant OCR complete with weathered chain and perishing tyres:


At first I thought the rims were worn, but it is just some surface corrosion; the bike has barely done any mileage. Importantly the frame, brakes, wheels and RD/shifters are all working well. I pretended the bike was a CX bike and took it on some trails near my house to test that everything was working as intended; it all seems quite solid. The standard RD-2200 rear derailleur even worked smoothly with an 11-32 cassette!

The next step was to make a cardboard template to ensure the battery pack I intended to buy would fit. It's a shame both drink holders are no longer usable, but it looks good otherwise:


I also checked the distance between the BB shell and the outside of the frame (quite small, so no problems here), and the type of BB shell (68mm English). Everything checks out, time to pull that trigger on the order!

I bought the BBS02 from Paul and his team at EM3EV, my only comment is they had a shipping issue with the battery using TNT but everything was sorted out quickly by them using an alternate carrier at no cost to me. Big thumbs up for service from EM3EV, and their workmanship on the crimping/heatshrink looks great. Here are the fun bits:



Armed with my $20 Aldi bike toolkit, I started stripping down the bike and removing the crank, the tools held up fine for this step:


Unfortunately when I went to remove the bottom bracket cup I found the tolerances on the tool weren't good enough and some splines were too fat:


Fortunately I was able to tidy these up with an angle grinder:


I have done a test fit of the tool and it engages fine now, but I haven't gotten any further. I'll try to document the rest of the build as I go. I hope you find this build as interesting as I do!

Continued at Electric bike build part 2.

July 21, 2016 11:26 PM

July 16, 2016

Ben MartinMaking surface mount pcbs with a CNC machine

The cool kids™ like to use toaster ovens with thermocouples to bake their own surface mount boards at home. I've been exploring doing that using boards that I make locally on a CNC. The joy of designing in the morning and having the working product in the evening. It seems SOIC size is OK, but smaller SMT IC packages currently present an issue. This gives interesting fodder for how to push precision down further. Doing SOIC and SMD LEDs/resistors from a sub-$1k CNC machine isn't too bad though, IMHO. And unlike other PCB-specific CNC machines, I can also cut wood and metal with my machine :-p

<iframe allowfullscreen="" frameborder="0" height="360" src="https://www.youtube.com/embed/gcfkqtgO0xA" width="640"></iframe>
Time to stock up on some SOIC microcontrollers for some full board productions. It will be very interesting to see if I can do an SMD usb connector. Makes it a nice complete black box to do something and talk ROS over USB.

July 16, 2016 08:53 AM

July 12, 2016

Paul GearonFaux Pas

The conference I attended last week was a pleasant excursion away from the confines of telecommuting. Other than the technology and systems presented, I particularly enjoyed meeting new people and catching up with friends. I think it really helped me focus again, both on work and on my personal projects.

That said, I disappointed myself with a conversation I had with one woman at the conference. She had released some software that is a brilliant example of the kind of systems that I am interested in learning more about, and I am looking forward to investigating how it all fits together. So when I saw her passing by I took the opportunity to introduce myself and thank her.

After the initial part of the conversation, I asked where she had traveled from, and she replied, "London," though she had originally come from elsewhere. I have other friends who live in and around London, and the cost of living in that city always seems prohibitive to me, so I asked about this. The response was that it wasn't so bad for her, and that she thought it was a nice place to settle down.

Without thinking, I took my cue from the phrase "settle down" and asked if she was married. She was very nice and replied no, but it was clear I had made a mistake and I allowed her to leave the conversation soon after that.

My initial reaction after this was to be defensive. After all, "settle down" is a phrase often associated with one's family, and many of us at the conference were married, so it didn't seem that harmful. But that is just one of those psychological tricks we pull on ourselves in order to not feel bad about our mistakes.

The fact is that this was a young woman on her own in a conference that was predominantly male. A question like this may be innocent enough in other circumstances, but as my wife pointed out, it can be threatening in an environment with such a strong gender disparity.

The tech industry has a major problem with its gender imbalance, and those women who remain are often treated so poorly that many of them choose to leave. I am particularly galled at my actions because I want to be part of the solution, and not perpetuating this unhealthy state of affairs. My actions were inadvertent, and not on a level of some of the behavior I have heard of, but if we want to see things change then it puts the onus on those of us who are in the majority to make things better. When it comes to the constant difficulties women and minorities face in the tech community, that means trying to improve things down to the subtlest level, as they all accumulate for individuals until they decide they don't want to be a part of it any more. And if that were to happen, we would be much poorer for it. Diversity breeds success.

I'm writing this as a reminder to myself to do better in the future. I'm also thinking that it may add to the voices of those who want to see things change, so those who haven't thought about it yet will give it some consideration, and those who have will be heartened by someone else trying... even when they get it wrong.

By following the conference on twitter, I saw that this woman went on to enjoy the rest of the conference, and I hope that my own mistake is something that she was able to quickly forget. If she ever reads this (unlikely), then I wholeheartedly apologise. Personally, I hope that this lesson stays with me a long time, and I always remember to make a more conscious effort about where I take the conversation in future.

July 12, 2016 09:05 PM

June 14, 2016

Ben MartinTerry & ROS

After a number of adventures I finally got a ROS stack set up so that move_base, amcl, and my robot base all like each other well enough for navigation to function. Luckily I had added some structural support to the physical base, as the self-driving control is a little snappier than I tend to be when driving the robot by hand.

There was an upgrade from Indigo to Kinetic in the mix, and the coupled update to Ubuntu Xenial to match the ROS platform update. I found that a bunch of ROS packages I use are not currently available for Kinetic, so I had an expanding catkin workspace of self-compiled packages to complete the update. Really cool stuff like rosserial wasn't available. Then I found that a timeout there caused a bunch of error messages about mismatched read sizes. I downgraded to the Indigo version of rosserial and the error was still there, so I assume it relates to the various serial drivers in the Linux kernel doing different timing than they did before. Still, one would have hoped that rosserial was more resilient to multiple partial packet deliveries. But with a timeout bump all works again. FWIW I've seen similar in boost: you try to read 60 bytes and get 43, then need to get the remaining 17 and stuff any excess into a readback buffer for the next packet read attempt. The boost one hit me going from 6- to 10-channel IO on an RC receiver-to-UART Arduino I created. The "joy" of low-level IO.
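The partial-read dance described here (ask for 60 bytes, get 43, keep any excess for the next packet) can be sketched as a small read-buffer wrapper. This is a hypothetical Python illustration of the technique, not rosserial's or boost's actual code:

```python
class PacketReader:
    """Accumulate partial reads until a full fixed-size packet is available.

    `read_fn` is any callable that returns up to `n` bytes per call,
    the way a serial port read with a short timeout behaves.
    """

    def __init__(self, read_fn, packet_size):
        self.read_fn = read_fn
        self.packet_size = packet_size
        self.buffer = b""  # holds excess bytes between calls

    def read_packet(self):
        # Keep reading until at least one whole packet is buffered.
        while len(self.buffer) < self.packet_size:
            chunk = self.read_fn(self.packet_size - len(self.buffer))
            if not chunk:
                raise TimeoutError("short read: %d of %d bytes"
                                   % (len(self.buffer), self.packet_size))
            self.buffer += chunk
        packet = self.buffer[:self.packet_size]
        self.buffer = self.buffer[self.packet_size:]
        return packet


# Simulate a port that delivers a 60-byte packet as 43 bytes, then the
# remaining 17 plus 5 stray bytes belonging to the next packet.
chunks = [b"A" * 43, b"B" * 17 + b"C" * 5]

def fake_read(n):
    return chunks.pop(0) if chunks else b""

reader = PacketReader(fake_read, 60)
pkt = reader.read_packet()
```

The key design point is that the buffer persists between calls, so a read that overshoots one packet boundary simply seeds the next packet.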

I found that the issues stopping navigation from working for me out of the box on Indigo were still there in Kinetic.  So I now have a very cool bit of knowledge for telling whether somebody actually has navigation working, or is just assuming that what one reads equals what will work out of the box.

Probably the next ROS thing will be trying to get a MoveIt stack going for the MeArm. I've got one of these cut and so will soon have it built. It seems like an ideal thing to work on MoveIt for, because it's a simple, low-cost arm that anybody can cut out and servo up. I've long wanted a simple tutorial on MoveIt for affordable arms. It might be that I'm the one writing that tutorial rather than just reading it.

Video and other goodness to follow. As usual, persistence is the key™.

June 14, 2016 12:35 PM

June 09, 2016

Ben Martinlibferris 2.0

A new libferris is coming. For a while I've been chipping away at porting libferris and its tree over to using boost instead of the loki and sigc++ libraries. This has been a little difficult in that it is a major undertaking, and you need to get it all working or things segv in wonderful ways.

Luckily there are tests for things like stldb4 so I could see that things were in decent shape along the way. I have also started to bring back the dejagnu test suite for libferris into the main tree. This has given me some degree of happiness that libferris is working ok with the new boost port.

As part of that I've been working on allowing libferris to store its settings in a configurable location. It's a chicken-and-egg problem how to set that configuration, as you need to be able to load a configuration in order to be able to set the setting. At the moment it is using an environment variable. I think I'll expand that to allow a longer list of default locations to be searched. So for example on OSX libferris can check /Applications/libferris.app/whatever as a fallback, so you can just install and run the ferris suite without any need to do more setup than a simple drag and drop.
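That search order can be sketched as follows. The environment variable name and default paths here are made-up stand-ins (and it's Python for brevity, while libferris itself is C++):

```python
import os

def find_settings_dir(env_var="LIBFERRIS_SETTINGS",  # hypothetical name
                      defaults=("~/.ferris",
                                "/Applications/libferris.app/settings"),
                      exists=os.path.isdir):
    """Return the first settings directory that exists.

    The environment variable wins, so a user can always override the
    search; otherwise fall back through the default locations in order.
    """
    candidates = []
    if os.environ.get(env_var):
        candidates.append(os.environ[env_var])
    candidates.extend(os.path.expanduser(d) for d in defaults)
    for path in candidates:
        if exists(path):
            return path
    return None

# With a fake `exists`, pretend only the app-bundle fallback is present:
found = find_settings_dir(
    exists=lambda p: p.endswith("libferris.app/settings"))
```

Injecting the `exists` check also makes the fallback order easy to unit-test without touching the real filesystem.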

For those interested, this is all pushed up to github so you can grab and use right now. Once I have expanded the test suite more I will likely make an announced 2.0 release with tarballs and possibly deb/rpm/dmg distributions.

New filesystems that I've had planned are for mounting MQTT, ROS, and YAML.

June 09, 2016 05:32 AM

June 06, 2016

Clinton RoySoftware Carpentry

Today I taught my first Software Carpentry talk, specifically the Intro to Shell. By most accounts it went well.

After going through the course today I think I’ve spotted two issues that I’ll try to fix upstream.

Firstly, command substitution is a concept that is covered, and used incorrectly IMO. Command substitution is fine when you know you're only going to get back one value, e.g. running identify on an image to get its dimensions. But when you're getting back an arbitrarily long list of files, your only option is to use xargs. Using xargs also means that we can drop another concept to teach.

The other thing that isn't covered, but I think should be, is reverse i-search of the history buffer; it's something that I use in my day-to-day use of the shell, not quite as much as tab completion, but it's certainly up there.

A third, minor issue that I need to check: I don't think brace expansion was shown in the loop example. I think it should be added, as the example I ended up using showed looping over strings, numbers and file globs, which is everything you ever really end up using.

Software Carpentry uses different coloured sticky notes attached to learners' laptops to indicate how they're going. It's really useful as a presenter out the front: if there's a sea of green you're good to go; if there are a few reds with helpers you're probably OK to continue; but if there are too many reds, it's time to stop and fix the problem. At the end of the session we ask people to give feedback, here for posterity:

Red (bad):

Orange (not so bad):

Green (good):



June 06, 2016 12:31 PM

May 10, 2016

Ben MartinThrough hole PCB Making -- Same Day

I initially thought that removing the multiple-week wait for a board would be the true joy of making PCBs locally. It turns out that quick iteration is the best part. Versions 2 and 3 of a board flow quickly, and you end up with something unexpected after only a few days of tinkering.

I'm still at the level of making through-hole stuff. Hopefully I can refine the process to allow some of the larger SMT stuff too. Throwing some caps, LEDs, resistors, DC jacks, and regulators on for a first cook round will cut down on the soldering phase.



My hello-world PCB was an ESP8266 carrier with an MCP23017 muxer and a bunch of buttons. This is an MQTT emission device which I will be using to assist in controlling a 3d printer. While web interfaces are flexible, some tend to put buttons too close together, and you can fairly easily crash the bed by clicking down instead of up in some cases.
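For illustration, here is a minimal sketch of the kind of (topic, payload) pair such a button box might emit. The topic names, pin mapping and payloads are invented, and a real client (the ESP8266 firmware, or e.g. paho-mqtt on a PC) would do the actual publish:

```python
import json

# Hypothetical mapping from MCP23017 pin number to printer action.
BUTTON_ACTIONS = {
    0: ("printer/jog", {"axis": "z", "dist": +1.0}),
    1: ("printer/jog", {"axis": "z", "dist": -1.0}),
    2: ("printer/home", {"axes": "xyz"}),
}

def button_message(pin):
    """Translate a pressed button into an MQTT (topic, payload) pair."""
    topic, action = BUTTON_ACTIONS[pin]
    return topic, json.dumps(action)

topic, payload = button_message(0)
# A real MQTT client would then do something like:
#   client.publish(topic, payload)
```

Keeping the pin-to-message mapping in one table makes it easy to rearrange buttons so that "down" and "up" are never neighbours.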






Today's iteration is an ESP8266 breadboarder. It takes 3v3 in, has a TTL serial header (on the left), and a resistor + LED combo on pin 14 for blink testing. The button at top right toggles into flashing mode, and the bottom of the board breaks out 7 GPIOs onto the breadboard. The 3v3 and ground also run, via a header under the hot glue, to the power rails on the breadboard. Very handy for testing a breadboard layout before designing the next PCB to have an ESP8266 pressed into it.



The breadboard side needs a little trimming back. Turns out the older breadboard I used to measure was wider than this one :o

May 10, 2016 08:58 AM

April 18, 2016

Ben MartinMaking PCB with a hobby CNC machine

One of the main goals I had in mind when getting a CNC "engraving" machine was to make PCBs at home. It's sort of full circle to the '70s, I guess. Only instead of using nasty chemicals I just have the engraver scratch off an isolation path between traces. Or so the plan goes.


My "hello world" board is the above controller for a 3d printer. This is a follow-up to the similar board I made to help use the CNC itself. For a 3d printer I added buttons to set a Z=0.1 height and a higher Z height to aid in homing. The breakout headers on the bottom right are for the ESP8266 daughter board. The middle chip is an MCP23017 GPIO extender. I've had good experiences using TWI on the ESP8266, and the MCP overcomes the pin limitations quite nicely. It also gives all the buttons a nice central place to go :)

The 3v3 regulator makes the whole show a plug-in-the-AA-pack-and-go type of board. The on/off switch is the physical connection to an external battery.

One step closer to the goal: design in the morning, physically create in the afternoon, and use in the evening.

April 18, 2016 08:25 AM

April 12, 2016

Blue HackersExplainer: what’s the link between insomnia and mental illness?

April 12, 2016 01:21 AM

April 02, 2016

Blue HackersOSMI Mental Health in Tech Survey 2016

April 02, 2016 02:31 AM

March 16, 2016

Blue HackersJust made a bad decision? Perhaps anxiety is to blame

http://medicalxpress.com/news/2016-03-bad-decision-anxiety-blame.html

Most people experience anxiety in their lives. For some, it is just a bad, passing feeling, but, for many, anxiety rules their day-to-day lives, even to the point of taking over the decisions they make.

Scientists at the University of Pittsburgh have discovered a mechanism for how anxiety may disrupt decision making. In a study published in The Journal of Neuroscience, they report that anxiety disengages a region of the brain called the prefrontal cortex (PFC), which is critical for flexible decision making. By monitoring the activity of neurons in the PFC while anxious rats had to make decisions about how to get a reward, the scientists made two observations. First, anxiety leads to bad decisions when there are conflicting distractors present. Second, bad decisions under anxiety involve numbing of PFC neurons.

March 16, 2016 12:11 AM

March 04, 2016

Ben FowlerContract-first RESTful API development with Spring

At work, my architect gave me an interesting task of building a public API on our application. Fairly straightforward as far as development tasks go of course, but this time, I was asked to take a contract-first approach. Write the schema first in a text editor, treat that as the golden source of truth, and then build a RESTful API to adhere to it. We chose Swagger 2. A general observation:

March 04, 2016 01:01 AM

February 26, 2016

Tim KentFluke 17B+ multimeter review

This is just a quick and honest review of my Fluke 17B+ multimeter.

Pros:


Cons:


The included TL75 test leads seem sufficient and seem to be of a reasonable quality.



As a hobbyist, this multimeter should serve me well. If you need more precision or you do AC measurements you might want to consider something like the Brymen BM257S.

I had assumed the 17B+ would have featured auto hold, so I was a bit disappointed about that (in fact it probably would have swayed me toward the BM257S) but I'm certainly not disappointed in the overall product.


February 26, 2016 10:38 PM

Tim KentFirst play with the BitScope Micro

I picked up a BitScope Micro to teach myself a bit about oscilloscopes. So far I've only used the first analog channel to monitor the wave generator, but it seems to do what it says on the box:


The test leads aren't the best quality, but otherwise this looks like a useful tool. You can't expect it to be in the same league as the professional gear, but for the money it's great for learning.


February 26, 2016 10:38 PM

Tim KentAldi 3D printer

I picked up an Aldi "Cocoon Create" 3D printer this morning and spent a bit of time trying it out. It appears to be a rebranded Wanhao I3:


The unit was packed really well and included a few goodies including a really comprehensive printed manual:


Putting the unit together was really simple, here's the unit after following the quick start guide:




It's using Jinsanshi Motor stepper motors, they seem to be fairly quiet:


The earth pin is connected:


Here's a photo I took during bed levelling:


This cord was pinching a bit, so I pulled it upward to resolve the issue:


First print:


All done:


Improving the machine already:



I'm really quite impressed with this machine especially given the price point. I was up and away printing a job within 15 minutes of opening the box.

The only thing I can whine about is the power supply fan which is unnecessarily loud. You certainly wouldn't want to have the thing running all day in the same room in an office environment.

February 26, 2016 10:38 PM

February 21, 2016

Clinton RoySouth Coast Track

I’m planning on doing this walk before linux.conf.au 2017. I’m interested in a couple of experienced hikers to join me. It starts at Melaleuca and finishes at Cockle creek.

<iframe frameborder="0" height="450" marginheight="0" marginwidth="0" scrolling="no" src="https://www.google.com/maps/embed?pb=!1m28!1m12!1m3!1d185317.5636110323!2d146.3644057152049!3d-43.46922022236897!2m3!1f0!2f0!3f0!3m2!1i1024!2i768!4f13.1!4m13!3e2!4m5!1s0xaa694b2fc9fde76f:0x1f60d21664b16e3!2sMelaleuca TAS 7001!3m2!1d-43.4219781!2d146.16299229999998!4m5!1s0xaa6c0e9e00c28065:0x5fdf6db3819e95de!2sCockle Creek Rd, Recherche TAS 7109!3m2!1d-43.5553528!2d146.8842032!5e0!3m2!1sen!2sau!4v1456030469240" width="600"></iframe>

Important Points

This is a reasonably serious hike, you need to have plenty of multi day hike experience to join me.

Todo:

Links:



February 21, 2016 04:15 AM

February 18, 2016

Anthony TownsBitcoin Fees vs Supply and Demand

Continuing from my previous post on historical Bitcoin fees… Obviously history is fun and all, but it’s safe to say that working out what’s going on now is usually far more interesting and useful. But what’s going on now is… complicated.

First, as was established in the previous post, most transactions are still paying 0.1 mBTC in fees (or 0.1 mBTC per kilobyte, rounded up to the next kilobyte).

fpb-10k-txns

Again, as established in the previous post, that’s a fairly naive approach: miners will fill blocks with the smallest transactions that pay the highest fees, so if you pay 0.1 mBTC for a small transaction, that will go in quickly, but if you pay 0.1 mBTC for a large transaction, it might not be included in the blockchain at all.

It’s essentially like going to a petrol station and trying to pay a flat $30 to fill up, rather than per litre (or per gallon); if you’re riding a scooter, you’re probably over paying; if you’re driving an SUV, nobody will want anything to do with you. Pay per litre, however, and you’ll happily get your tank filled, no matter what gets you around.

But back in the bitcoin world, while miners have been using the per-byte approach since around July 2012, as far as I can tell, users haven’t really even had the option of calculating fees in the same way prior to early 2015, with the release of Bitcoin Core 0.10.0. Further, that release didn’t just change the default fees to be per-byte rather than (essentially) per-transaction; it also dynamically adjusted the per-byte rate based on market conditions — providing an estimate of what fee is likely to be necessary to get a confirmation within a few blocks (under an hour), or within ten or twenty blocks (two to four hours).

There are a few sites around that make these estimates available without having to run Bitcoin Core yourself, such as bitcoinfees.21.co, or bitcoinfees.github.io. The latter has a nice graph of recent fee rates:

bitcoinfees-github

You can see from this graph that the estimated fee rates vary over time, both in the peak fee to get a transaction confirmed as quickly as possible, and in how much cheaper it might be if you’re willing to wait.

Of course, that just indicates what you "should" be paying, not what people actually are paying. But since the blockchain is a public ledger, it's at least possible to sift through the historical record. Rusty already did this, of course, but I think there's a bit more to discover. There are three ways in which I'm doing things differently from Rusty's approach: (a) I'm using quantiles instead of an average, (b) I'm separating out transactions that pay a flat 0.1 mBTC, and (c) I'm analysing a few different transaction sizes separately.

To go into that in a little more detail:

The following set of graphs takes this approach, with each transaction size presented as a separate graph. Each graph breaks the relevant transactions into sixths, selecting the sextiles separating each sixth — each sextile is then smoothed over a 2-week period to make it a bit easier to see.
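The quantile idea in (a) can be sketched with the standard library: compute the five sextiles that split a sample of fee rates into sixths. The data below is made up for illustration, not taken from the post's dataset:

```python
from statistics import quantiles

def sextiles(fee_rates):
    """Five cut points splitting the sample into six equal-sized groups.

    Unlike a mean, quantiles are robust to a few huge outlier fees.
    """
    return quantiles(fee_rates, n=6)

# One day's (made-up) fee rates in mBTC/kB, including an outlier
# that would badly skew a simple average:
rates = [0.10, 0.10, 0.12, 0.19, 0.27, 0.45, 10.0]
cuts = sextiles(rates)
```

The middle cut point is the median, so the 10.0 mBTC/kB outlier barely moves any of the five values, which is the whole point of preferring quantiles to an average here.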

fpb-by-sizes

We can make a few observations from this (click the graph to see it at full size):

As foreshadowed, we can redo those graphs with transactions paying one of the standard fees (ie exactly 0.1 mBTC, 0.01 mBTC, 0.2 mBTC, 0.5 mBTC, 1m BTC, or 10 mBTC) removed:

fpb-by-sizes-nonstd

As before, we can make a few observations from these graphs:

At this point, it’s probably a good idea to check that we’re not looking at just a handful of transactions when we remove those paying standard 0.1 mBTC fees. Graphing the number of transactions per day of each type (ie, total transactions, 220 byte transactions (1-input, 2-output), 370 byte transactions (2-input, 2-output), 520 byte transactions (3-input, 2-output), and 1kB transactions shows that they all increased over the course of the year, and that there are far more small transactions than large ones. Note that the top-left graph has a linear y-axis; while the other graphs use a logarithmic y-axis — so that each step in the vertical indicates a ten-times increase in number of transactions per day. No smoothing/averaging has been applied.

fpb-number-txns

We can see from this that by and large the number of transactions of each type have been increasing, and that the proportion of transactions paying something other than the standard fees has been increasing. However it’s also worth noting that the proportion of 3-input transactions using non-standard fees actually decreased in November — which likely indicates that many users (or the maintainers of wallet software used by many users) had simply increased the default fee temporarily while concerned about the stress test, and reverted to defaults when the concern subsided, rather than using a wallet that estimates fees dynamically. In any event, by November 2015, we have at least about a thousand transactions per day at each size, even after excluding standard fees.

If we focus on the sextiles that roughly converge to the trend line we used earlier, we can, in fact make a very interesting observation: after November 2015, there is significant harmonisation on fee levels across different transaction sizes, and that harmonisation remains fairly steady even as the fee level changes dynamically over time:

fpb-fee-market

Observations this time?

Along with the trend line, I’ve added four grey, horizontal guide lines on those graphs; one at each of the standard fee rates for the transaction sizes we’re considering (0.1 mBTC/kB for 1000 byte transactions, 0.19 mBTC/kB for 520 byte transactions, 0.27 mBTC/kB for 370 byte transactions, and 0.45 mBTC/kB for 220 byte transactions).

An interesting fact to observe is that when the market rate goes above any of the grey dashed lines, transactions of the corresponding size that just pay the standard 0.1 mBTC fee become less profitable to mine than transactions paying the market rate. In a very particular sense this induces a “fee event” of the type mentioned earlier. That is, with the fee rate above 0.1 mBTC/kB, transactions of around 1000 bytes that pay 0.1 mBTC will generally suffer delays. Following the graph, for the transactions we’re looking at there have already been two such events — a fee event in July 2015, where 1000 byte transactions paying standard fees began getting delayed regularly because market fees began exceeding 0.1 mBTC/kB (ie, the 0.1 mBTC fee divided by 1 kB transaction size); and a second fee event during November impacting 3-input, 2-output transactions, as market fees exceeded 0.19 mBTC/kB (ie, 0.1 mBTC divided by 0.52 kB). Per the graph, a few of the trend lines are lingering around 0.27 mBTC/kB, indicating a third fee event is approaching, where 370 byte transactions (ie 2-input, 2-output) paying standard fees will start to suffer delayed confirmations.
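The guide-line rates and the fee-event condition are simple arithmetic; a sketch (function names are mine, the sizes and the flat 0.1 mBTC fee are from the text):

```python
def fee_rate_mbtc_per_kb(fee_mbtc, size_bytes):
    """Fee rate, in mBTC/kB, for a transaction paying a flat fee."""
    return fee_mbtc / (size_bytes / 1000)

# Standard 0.1 mBTC flat fee at each size gives the guide lines:
# 1000 B -> 0.10, 520 B -> ~0.19, 370 B -> ~0.27, 220 B -> ~0.45 mBTC/kB

def fee_event(market_rate_mbtc_per_kb, size_bytes, flat_fee_mbtc=0.1):
    """True once the market rate exceeds what a flat-fee transaction of
    this size pays per kB, i.e. standard-fee transactions of this size
    start being out-bid and delayed."""
    return market_rate_mbtc_per_kb > fee_rate_mbtc_per_kb(flat_fee_mbtc,
                                                          size_bytes)
```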

However the grey lines can also be considered as providing “resistance” to fee increases — for the market rate to go above 0.27 mBTC/kB, there must be more transactions attempting to pay the market rate than there were 2-input, 2-output transactions paying 0.1 mBTC. And there were a lot of those — tens of thousands — which means market fees will only be able to increase with higher adoption of software that calculates fees using dynamic estimates.

It’s not clear to me why fees harmonised so effectively as of November; my best guess is that it’s just the result of gradually increasing adoption, accentuated by my choice of quantiles to look at, along with averaging those results over a fortnight. At any rate, most of the interesting activity seems to have taken place around August:

Of course, many wallets still don’t do per-byte, dynamic fees, as far as I can tell:

Summary

February 18, 2016 09:40 AM

February 08, 2016

Ben FowlerSwagger sucks

I am surprised at how much of a step backwards JSON-based web services using "standards" like Swagger are, compared to SOAP. The tooling (especially Swagger-anything) is amateur-hour, dilettante crap, written by teenagers with short attention spans, in shoddy JS (or working Java that compiles, if you're lucky), to a shoddy, incomplete specification which nobody bothers to adhere to, or implement correctly.

February 08, 2016 10:19 PM

January 28, 2016

Ben MartinCNC Control with MQTT

I recently upgraded a 3040 CNC machine by replacing the parallel port driven driver board with a smoothieboard. This runs a 100 MHz Cortex-M MCU and has USB and Ethernet interfaces, so it's much more modern. This all led me to come up with a new controller to move the cutting head, all without needing to update the controller box or recompile or touch the smoothieboard firmware.



I built a small controller box with 12 buttons on it and shoved an esp8266 into that box with an MCP23017 chip to give the esp mcu access to 16 gpio over TWI. The firmware on the esp is fairly simple: it enables the internal pull ups on all gpio pins on the 23017 chip and sends an MQTT message when each button is pressed and released. The time since MCU boot in milliseconds is sent as the MQTT payload. This way, one can work out whether it was a short or longer button press and move the cutting head a proportional distance.
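The press/release timing logic might look something like this on the receiving side (a sketch only; the threshold and distances are invented for illustration, not taken from the firmware):

```python
def jog_distance_mm(press_ms, release_ms,
                    short_mm=0.1, long_mm=1.0, threshold_ms=400):
    """Each button publishes milliseconds-since-boot at press and at
    release; the difference distinguishes a short tap from a long hold,
    and selects a proportional jog distance. All numbers here are
    hypothetical defaults."""
    held = release_ms - press_ms
    return long_mm if held >= threshold_ms else short_mm
```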

The web interface for smoothie provides a pronterface-like interface for manipulating where the cutting head is on the board and the height it is at. Luckily it's open source firmware, so I could read the non-obfuscated javascript that the web interface uses and work out the correct POST method to send gcode commands directly to the smoothieboard on the CNC.

The interesting design here is using software on the server to make the controller box meet the smoothieboard. On the server MQTT messages are turned into POST requests using mqtt-launcher. The massive benefit here is that I can change what each button does on the CNC without needing to reprogram the controller or modify the cnc firmware. Just change the mqtt-launcher config file and all is well. So far MQTT is the best "IoT" tech I've had the privilege to use.
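The glue can be sketched like this. Note this is a Python illustration of the idea, not an actual mqtt-launcher config; the topic names, gcode moves, and the /command endpoint are all assumptions:

```python
# Hypothetical topic-to-gcode mapping in the spirit of what
# mqtt-launcher does: each MQTT topic runs a command that POSTs gcode
# to the smoothieboard's web interface.
GCODE_FOR_TOPIC = {
    "cnc/jog/x+": "G91 G0 X1",    # relative move, +1 mm in X
    "cnc/jog/x-": "G91 G0 X-1",
    "cnc/jog/z+": "G91 G0 Z0.1",
    "cnc/jog/z-": "G91 G0 Z-0.1",
}

def post_command(topic, host="cnc.local"):
    """Build the curl invocation that would send the gcode for a topic.
    Host name and endpoint are placeholders."""
    gcode = GCODE_FOR_TOPIC[topic]
    return ["curl", "-d", gcode + "\n", "http://%s/command" % host]
```

Changing what a button does is then just a one-line edit to the mapping, which is exactly the benefit described above.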



I'll probably build another controller for controlling 3d printers. Although most 3d printers just home each axis, there are sometimes pesky commands that must be run at startup, to help home the z-axis for example. Having physical buttons to move the x axis down by 4mm, 1mm and 0.1mm makes it much less likely to fat finger the web interface and accidentally crash the bed by initiating a larger z-axis movement than one had hoped for.

January 28, 2016 10:22 AM

January 23, 2016

Tim KentModifying the (Aldi) Lumina coffee grinder to produce a finer grind

I purchased one of the Lumina coffee grinders from Aldi a few years ago, but only just tried to use it for the first time recently. Even on the finest setting it was producing grinds that were very coarse. The product really should be marketed as a herb grinder in standard form.


As this machine uses burrs to do the grinding I thought I'd try shimming one of the burrs to bring the two closer together, hopefully resulting in a finer grind. This actually ended up working really well, and I've been using it at least three times a week for a few months now for all my coffee grinding duties!

Here's what you will need to do the modification:


First of all you'll want to remove the top burr assembly, which you can do by turning it clockwise:


You can see the tabs that lock it into place here:


Once you've removed it, flip it upside down and you should be able to see three Phillips head screws; you'll want to remove these:


Lift the metal burr away from the rest of the plastic assembly and you will then be able to fit the three 8x1mm washers as per the picture:


As you can see this has shimmed the burr by 1mm:


Fit the burr back onto the assembly and fasten the screws; don't go overboard with force though, as they are just screwing into plastic.

Adjust the machine to the coarse setting and insert the top burr assembly, lock it into place by turning anti-clockwise.

Test out some grinding!

January 23, 2016 04:21 AM

Blue HackersScience on High IQ, Empathy and Social Anxiety | Feelguide.com

http://www.feelguide.com/2015/04/22/science-links-anxiety-to-high-iqs-sentinel-intelligence-social-anxiety-to-very-rare-psychic-gift/

Although Western medicine has radically transformed our world for the better, and given rise to some of the most remarkable breakthroughs in human history, in some ways it is still scratching at the lower slopes of the bigger picture. Only recently have our health systems begun to embrace the healing power of some ancient Eastern traditions such as meditation, for example. But overall, nowhere across the human health spectrum is Western medicine more unknowledgeable than in the realm of mental health. The human brain is the most complex biological machine in the known Universe, and our understanding of its inner workings is made all the more challenging when we factor in the symbiotic relationship of the mind-body connection.

When it comes to the wide range of diagnoses in the mental health spectrum, anxiety is the most common — affecting 40 million adults in the United States aged 18 and older (18% of U.S. population). And although anxiety can manifest in extreme and sometimes crippling degrees of intensity, Western doctors are warming up to the understanding that a little bit of anxiety could be incredibly beneficial in the most unexpected ways. One research study out of Lakehead University discovered that people with anxiety scored higher on verbal intelligence tests. Another study conducted by the Interdisciplinary Center Herzliya in Israel found that people with anxiety were superior to other participants at maintaining laser-focus while overcoming a primary threat as they are being bombarded by numerous other smaller threats, thereby significantly increasing their chances of survival. The same research team also discovered that people with anxiety showed signs of “sentinel intelligence”, meaning they were able to detect real threats that were invisible to others (i.e. test participants with anxiety were able to detect the smell of smoke long before others in the group).

Another research study from the SUNY Downstate Medical Center in New York involved participants with generalized anxiety disorder (GAD). The findings revealed that people with severe cases of GAD had much higher IQs than those who had more mild cases. The theory is that “an anxious mind is a searching mind,” meaning children with GAD develop higher levels of cognitive ability and diligence because their minds are constantly examining ideas, information, and experiences from multiple angles simultaneously.

But perhaps most fascinating of all is a research study published by the National Institutes of Health and the National Center for Biotechnology Information involving participants with social anxiety disorder (i.e. social phobia). The researchers embarked on their study with the following thesis: “Individuals with social phobia (SP) show sensitivity and attentiveness to other people’s states of mind. Although cognitive processes in SP have been extensively studied, these individuals’ social cognition characteristics have never been examined before. We hypothesized that high-socially-anxious individuals (HSA) may exhibit elevated mentalizing and empathic abilities.” The research methods were as follows: “Empathy was assessed using self-rating scales in HSA individuals (n=21) and low-socially-anxious (LSA) individuals (n=22), based on their score on the Liebowitz social anxiety scale. A computerized task was used to assess the ability to judge first and second order affective vs. cognitive mental state attributions.”

Remarkably, the scientists found that a large portion of people with social anxiety disorder are gifted empaths — people whose right-brains are operating significantly above normal levels and are able to perceive the physical sensitivities, spiritual urges, motivations, and intentions of other people around them (see Dr. Jill Bolte Taylor’s TED Talk below for a powerful explanation of this ability). The team’s conclusion reads: “Results support the hypothesis that high-socially-anxious individuals demonstrate a unique profile of social-cognitive abilities with elevated cognitive empathy tendencies and high accuracy in affective mental state attributions.”

Empaths who have fully embraced their abilities are able to function on a purely intuition-based level. As Steve Jobs once said, “[Intuition] is more powerful than intellect,” and in keeping with this appreciation, writer Carolyn Gregoire recently penned a fascinating feature entitled “10 Things Highly Intuitive People Do Differently” and you can read it in full by visiting HuffingtonPost.com. And to learn why Western medicine may be misinterpreting mental illness at large, be sure to read the fascinating account of Malidoma Patrice Somé, Ph.D. — a shaman and a Western-trained doctor. “In the shamanic view, mental illness signals the birth of a healer, explains Malidoma Patrice Somé. Thus, mental disorders are spiritual emergencies, spiritual crises, and need to be regarded as such to aid the healer in being born.” You can read the full story by reading “What A Shaman Sees In A Mental Hospital”. For more great stories about the human brain be sure to visit The Human Brain on FEELguide. (Sources: Business Insider, The Mind Unleashed, Huffington Post, photo courtesy of My Science Academy).

January 23, 2016 01:13 AM

January 22, 2016

Tim KentLearning more about EFI systems

I've been interested in the inner workings of cars and engines for a few years now, in particular engine management systems. One project that has my attention is MegaSquirt, which is an Electronic Fuel Injection controller intended to teach people how EFI works. I've decided to get in on the action, but since my soldering skills are a bit rusty, I'm starting with the MegaStim kit (much easier to build), which is a small diagnostics device used to test out MegaSquirt hardware. I purchased the kit from DIYAutoTune.com and I was impressed with the service; I'll get the rest of my MegaSquirt gear from them for sure. Even if I don't end up using MegaSquirt for anything, I'm sure it will still be worth it for the experience.

January 22, 2016 02:12 PM

Tim KentDealing with Japanese imports

A number of my friends have recently purchased Japanese cars. One of the minor issues that is often overlooked is the lack of ability to tune into any FM radio stations above 90MHz. Since I was working on one of these said cars this afternoon I thought it would be a good time to mention this problem. Among other things, be prepared to replace the stereo if you import!

Even taking these things into consideration, Japanese imports are still great value! You get way more car for your dollar.

January 22, 2016 02:12 PM

Tim KentFight against dirty wheels

I don't know what it is with European cars, but most of them come standard with brake pads that love to totally cover the wheels with carbon. I've attached a picture of what my wheels look like when it's been a couple of weeks since the last wash. Next service I'm supplying a set of EBC Greenstuff brake pads which should hopefully put an end to this problem; I'll keep you posted.

January 22, 2016 02:12 PM

Tim KentMoving from VMware Server to ESXi

At home I'm currently using VMware Server with Windows 2003 as the host OS. In addition to running 5 guest operating systems, the host OS performs the following tasks:

Lately I've been reading up on VMware ESXi and it appears as though my existing hardware is going to work; however, I'm having a hard time deciding if the extra efficiency is worth the hassle. From what I've read I will have to find a different way to perform backups since local USB devices aren't supported; in addition I will have to provision a VM to perform the NAT and routing duties. On the other hand, I/O struggles at times with VMware Server, so the extra performance and stability from ESXi would be welcome; I've had VMware Server's NAT implementation crash twice during 18 months of use.

If anyone out there has made the move, I'd love to hear their experiences and feedback!

January 22, 2016 02:11 PM

Tim KentKilling the registration prompt in RawShooter Essentials 2006

I'm still using the final version of RawShooter Essentials as it supports my SLR's RAW format (Adobe have now bought out Pixmantec, so this is no longer being updated or supplied by them; it is only available from other sources such as download.com). So, if you've managed to acquire it you will find that whenever RawShooter Essentials is freshly installed it will prompt you to register each time you start the program. As Adobe have shut down Pixmantec's servers the registration will fail. I compared the Windows registry from a fresh install with an existing (registered) copy and found that the registered copy had some extra registry entries. With these entries you should be able to kill the annoying prompt:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\CLSID\{0F540988-8449-4C30-921E-74BCCEA70535}]

[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\CLSID\{0F540988-8449-4C30-921E-74BCCEA70535}\ProgId]

Save the above contents to a file with the extension ".REG" and double-click it to install the entries. You may have to fix the lines if they wrap. You will now find that next time you open the program the registration prompt will be gone.

January 22, 2016 02:11 PM

Tim KentArchiving files from my Topfield PVR

I've had a Topfield PVR for quite a few years now. The unit is great; I can't fault it, really. Until recently I did however have one ongoing problem: I kept running out of space! To help combat the space problem I upgraded to a Samsung 400GB drive, but this was only a short term band-aid.

The next solution was commissioning a Linksys NSLU2 running uNSLUng and ftpd-topfield to allow FTP access to the unit (my computer isn't anywhere near the TV and the Topfield only has a USB port). So the space problem on the Topfield was fixed, but I had loads of transport stream files sitting on my computer. It was just too expensive (time-wise) to edit out all the ads, convert to MPEG-2 and burn to DVD or DivX. So last weekend I scripted it:
Seems to work quite nicely; the ad detection works fairly well but it's not 100% perfect. One thing I had to do to get comskip working was rename the file extension from REC to TS.
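The original script isn't shown above, but the workflow it describes can be sketched. This just builds the commands for each step (rename .REC to .TS so comskip accepts it, run ad detection, then transcode); the tool arguments are illustrative, not the author's:

```python
import os

def archive_commands(recording):
    """Commands to archive one Topfield recording (a sketch)."""
    base, ext = os.path.splitext(recording)
    cmds = []
    ts = recording
    if ext.lower() == ".rec":            # comskip wants a .TS extension
        ts = base + ".ts"
        cmds.append(["mv", recording, ts])
    cmds.append(["comskip", ts])          # ad detection, writes cut points
    cmds.append(["ffmpeg", "-i", ts, base + ".mpg"])  # MPEG-2 for DVD
    return cmds
```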

The whole thing was fairly trivial after reading the CLI documentation for each program, but if you need a hand feel free to contact me.

January 22, 2016 02:11 PM

Tim KentOther efforts to stop Spam

Watching the logs closely after my Greylisting install made me notice just how many attempts are being made to deliver junk to my mail server. I thought I'd add a few more checks to Postfix so the messages don't even make it to the Greylisting stage. The most effective ones are: requiring a fully qualified HELO string (you'd be surprised how many Spammers just use HELO localhost), and checking that the sender exists before accepting the message. This is done with reject_non_fqdn_hostname under smtpd_recipient_restrictions, and reject_unverified_sender under smtpd_sender_restrictions respectively. A good guide on how to set up sender verification can be found on the Postfix website.
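A sketch of the corresponding main.cf fragment. The two reject_* restriction names are from the text above; the surrounding permit/reject entries are typical boilerplate, included here as assumptions:

```
# main.cf (fragment) -- ordering of the other restrictions is illustrative
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_non_fqdn_hostname,
    reject_unauth_destination

smtpd_sender_restrictions =
    reject_unverified_sender
```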

January 22, 2016 02:11 PM

Tim KentDNS resolution on iPhone

I've been playing with a few iPhones lately and have had trouble getting WiFi working through our proxy. After much hair pulling, the problem turned out to be a feature in the iPhone DNS resolver that refuses to look up any hostname ending in ".local". This also appears to be a problem on Mac OS X:

http://support.apple.com/kb/HT2385?viewlocale=en_US

With OS X you can add "local" to the Search Domains field and disable this behaviour, unfortunately it doesn't work for the iPhone.

January 22, 2016 02:11 PM

Tim KentData destruction

After cleaning my home office I was left with some old hard drives to dispose of, which got me thinking about data destruction. In the past I cleared my drives with a couple of passes of random data using dd, but is this thorough enough?

This time round I have used a free bootable CD called CopyWipe (great utility, BootIt NG is also worth a mention). Each drive was given 5 passes, and then taken to with a hammer just to be sure. I've linked a picture to the "after" shot.
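For a single file rather than a whole device, the multi-pass overwrite idea looks roughly like this (a simplification of what dd or CopyWipe do; it says nothing about wear-levelled or remapped sectors, which is part of why physical destruction is still a good final step):

```python
import os

def overwrite_random(path, passes=2):
    """Overwrite a file in place with random data, multiple passes."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining:
                chunk = min(remaining, 1 << 20)  # 1 MiB at a time
                f.write(os.urandom(chunk))
                remaining -= chunk
            f.flush()
            os.fsync(f.fileno())  # push each pass to disk
```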

I can see data destruction being a larger problem as time goes on. I'd be interested to know the techniques others use for this problem.

January 22, 2016 02:11 PM

Tim KentInteresting Microsoft disclaimer

Hotmail's disclaimer caught my attention whilst I was troubleshooting an Internet connection using netcat. Looks like they're either hiring the Mafia to do their dirty work, or taking matters into their own hands:
Violations will result in use of equipment located in California and other states.
This disclaimer is displayed when you first connect to the SMTP port of mail.hotmail.com. I have no idea what kind of torture the disclaimer implies but I'd rather not find out!

January 22, 2016 02:11 PM

Tim KentSick of Spam

For some reason, Spammers love my e-mail address. I'm guessing it's one of my posts to Usenet where they have harvested my particular address. I've been looking into Greylisting recently to help combat the situation, specifically Postgrey (I use the Postfix MTA).

Since setting this up, I have yet to receive an unsolicited message! That's a big difference considering I was getting around 50 junk messages a day beforehand. Installation was very simple under Debian as it has already been packaged, so you just need to apt-get it and add a check_policy_service line to your smtpd_recipient_restrictions in main.cf.
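The main.cf change amounts to something like this (a sketch; 10023 is Postgrey's default listening port on Debian, so adjust if yours differs):

```
# main.cf (fragment)
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    check_policy_service inet:127.0.0.1:10023
```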

January 22, 2016 02:11 PM

Tim KentVoIP headaches

I've recently signed up with PennyTel to get better prices on phone calls. This was after two relatives of mine both recommended PennyTel and said how easy the whole thing was to set up when using a Linksys SPA-3102.

OK, so I signed up and purchased the Linksys device. I set the networking stuff through the phone then followed the guide on the PennyTel website to configure SIP (VoIP connectivity stuff). I was feeling pretty good about the whole thing, that is until I made the first phone call!

I thought I'd try to impress a mate so I called up one of my tech savvy friends and told them I was using VoIP to talk to them. The quality sounded quite good, then after 32 seconds the call dropped out! I had called a mobile so I thought it may just be a glitch. The next two calls resulted in the same drop out after 32 seconds. By this stage my friend thought it was quite amusing that my new phone service was so unreliable after I had been boasting about the cheap call rates!

After hours of Googling and messages back and forth between PennyTel support, I still hadn't managed to avoid the call drop out, or another intermittent problem where the SIP registration was randomly failing. The settings looked fine, and PennyTel didn't appear to have any outages as I tested things with a soft phone from another DSL connection. I was really regretting the whole thing, and getting pretty pissed off. I had a think about the whole scenario, and the only thing I hadn't eliminated was my DrayTek Vigor 2600We ADSL router. I had already set the port forwards required for the Linksys SPA (UDP 5060-5061 and 16384-16482) so thought nothing more of router configuration. As a last resort, I searched the Internet for people running VoIP through their DrayTek to see if any incompatibilities existed. I came across a site with someone experiencing my exact problem, and they had a workaround! It appears that the 2600We has a SIP application layer proxy enabled by default. This really confuses things on the Linksys and has to be disabled. After telnetting to the device and entering the following command, things were working great:

sys sip_alg 0

Note that you may need to upgrade your DrayTek firmware for this command to be available.

After the changes I made some calls and no longer got disconnected after 32 seconds! Woohoo! At the end of the day I'm glad I chose VoIP for the cost savings, even though it caused me grief the first few days.

Update: One other setting I have found needed a bit of tweaking was the dial plan. Here is my current Brisbane dial plan for an example:

(000S0<:@gw0>|<:07>[3-5]xxxxxxxS0|0[23478]xxxxxxxxS0|1[38]xx.<:@gw0>|19xx.!|xx.)

January 22, 2016 02:11 PM

Tim KentKeeping up with multiple Blogs

After a quick search for RSS aggregators, I found Google Reader to do exactly what I want. It saves so much of my time monitoring RSS feeds from a central location, and being web-based means I can access it from anywhere. In case you are wondering, I don't work for Google!

Update: Since publishing this, I've had a few people tell me how much Google Reader sucks compared to the alternatives out there. Greg Black recommended Bloglines to me, and after giving it a go I must say that it's a much better solution. Looks like I'll stick with this one for a while.

January 22, 2016 02:11 PM

Tim KentBlackBerry MDS proxy pain

I'm just having a rant about MDS SSL connections through a proxy. Non-SSL traffic will work fine, however SSL traffic appears to go direct even when proxy settings have been defined as per KB11028. My regular expression matches the addresses fine.

Surely people out there want/need to proxy all their BES MDS traffic?

January 22, 2016 02:11 PM

Tim KentA potential backup solution for small sites running VMware ESXi

Today, external consumer USB3 and/or eSATA drives can be a great low cost alternative to tape. For most small outfits, they fulfil the speed and capacity requirements for nightly backups. I use the same rotation scheme with these drives as I did with tape, to great success.

Unfortunately these drives can't easily be utilised by those running virtualised servers on top of ESXi. VMware offers SCSI pass-through as a supported option; however, the tape drives and media are quite expensive by comparison.

VMware offered a glimpse of hope with their USB pass-through introduced in ESXi 4.1, but it proved to have extremely poor throughput (~7MB/sec), so it can realistically shift only a couple of hundred GB per night.

I have trialled some USB over IP devices; the best of these can lift the throughput from ~7MB/sec to ~25MB/sec, but the drivers can be problematic and are often only available for Windows platforms.

This got me thinking about presenting a USB3 controller via ESXi's VMDirectPath I/O feature.

VMDirectPath I/O requires a CPU and motherboard capable of Intel Virtualization Technology for Directed I/O (VT-d) or AMD IP Virtualization Technology (IOMMU). It also requires that your target VM is at a hardware level of 7 or greater. A full list of requirements can be found at http://kb.vmware.com/kb/1010789.

I tested pass-through on a card with the NEC/Renesas uPD720200A chipset (Lindy part # 51122) running firmware 4015. The test VM runs Windows Server 2003R2 with the Renesas 2.1.28.1 driver. I had to configure the VM with pciPassthru0.msiEnabled = "FALSE" as per http://www.vmware.com/pdf/vsp_4_vmdirectpath_host.pdf or the device would show up with a yellow bang in Device Manager and would not function.
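For reference, the .vmx entries involved look like this (a sketch: only the msiEnabled line comes from the text above, and pciPassthru0 assumes the USB3 controller is the first passed-through device):

```
pciPassthru0.present = "TRUE"
pciPassthru0.msiEnabled = "FALSE"
```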

The final result - over 80MB/sec throughput (both read and write) from a Seagate 2.5" USB3 drive!

January 22, 2016 02:10 PM

January 12, 2016

Ben FowlerJava gets monads

Something which came out of a mentoring session at work today: the guy leading the discussion used Optional<T> everywhere, but subsequently handled them in a fairly raw, naive way (https://dzone.com/articles/whats-wrong-java-8-part-iv). It got me thinking: where's Java's for-comprehension syntax, and why is it missing? Well, it would seem that the Stream APIs go much of the way to filling this

January 12, 2016 05:55 PM

January 06, 2016

Anthony TownsBitcoin Fees in History

Prior to Christmas, Rusty did an interesting post on bitcoin fees which I thought warranted more investigation. My first go involved some python parsing of bitcoin-cli results, which was slow and, as it turned out, inaccurate — bitcoin-cli returns figures denominated in bitcoin with 8 digits after the decimal point, and python happily rounds that off, making me think a bunch of transactions that paid 0.0001 BTC in fees were paying 0.00009999 BTC in fees. Embarrassing. Anyway, switching to bitcoin-iterate and working in satoshis instead of bitcoin just as Rusty did was a massive improvement.
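One way to avoid that floating-point trap is to parse the bitcoin-cli strings with Decimal and convert to integer satoshis immediately; a minimal sketch (my code, not the author's):

```python
from decimal import Decimal

def btc_to_satoshi(amount_str):
    """Convert a bitcoin-cli amount string (8 decimal places) to an
    exact integer number of satoshis, avoiding binary floating point
    entirely."""
    return int(Decimal(amount_str) * 100000000)
```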

From a miner’s perspective (ie, the people who run the computers that make bitcoin secure), fees are largely irrelevant — they’re receiving around $11000 USD every ten minutes in inflation subsidy, versus around $80 USD in fees. If that dropped to zero, it really wouldn’t make a difference. However, in around six months the inflation subsidy will halve to 12.5 BTC; which, if the value of bitcoin doesn’t rise enough to compensate, may mean miners will start looking to turn fee income into real money — earning $5500 in subsidy plus $800 from fees could be a plausible scenario, eg (though even that doesn’t seem likely any time soon).

Even so, miners don’t ignore fees entirely even now — they use fees to choose how to fill up about 95% of each block (with the other 5% filled up more or less according to how old the bitcoins being spent are). In theory, that’s the economically rational thing to do, and if the theory pans out, miners will keep doing that when they start trying to get real income from fees rather than relying almost entirely on the inflation subsidy. There’s one caveat though: since different transactions are different sizes, fees are divided by the transaction size to give the fee-per-kilobyte before being compared. If you graph the fee paid by each kB in a block you thus get a fairly standard sort of result — here’s a graph of a block from a year ago, with the first 50kB (the priority area) highlighted:

block

You can see a clear overarching trend where the fee rate starts off high and gradually decreases, with two exceptions: first, the first 50kB (shaded in green) has much lower fees due to mining by priority; and second, there are frequent short spikes of high fees, which are likely produced by high fee transactions that spend the coins mined in the preceding transaction — ie, if they had been put any earlier in the block, they would have been invalid. Equally, compared to the priority of the first 50kB of transactions, the remaining almost 700kB contributes very little in terms of priority.

But, as it turns out, bitcoin wallet software often just picks a particular fee and uses it for all transactions no matter the size:

block-raw-fee

From the left hand graph you can see that, a year ago, wallet software was mostly paying about 10000 satoshi in fees, with a significant minority paying 50000 satoshi in fees — but since those were at the end of the block, which was ordered by satoshis per byte, those transactions were much bigger, so that their fee/kB was lower. This seems to be due to some shady maths: while the straightforward way of doing things would be to have a per-byte fee and multiply that by the transaction’s size in bytes, eg 10 satoshis/byte * 233 bytes gives 2330 satoshi fee; things are done in kilobytes instead, and a rounding mistake occurs, so rather than calculating 10000 satoshis/kilobyte * 0.233 kilobytes, the 0.233 is rounded up to 1kB first, and the result is just 10000 satoshi. The second graph reverses the maths to work out what the fee/kilobyte (or part thereof) figure would have been if this formula was used, and for this particular block, pretty much all the transactions look how you’d expect if exactly that formula was used.
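That rounding mistake is easy to reproduce; a sketch of the two calculations (function names are mine; the 10000 satoshi/kB rate and 233 byte size are from the text):

```python
import math

def fee_exact(rate_per_kb, size_bytes):
    """The straightforward calculation: pro-rata per byte."""
    return rate_per_kb * size_bytes // 1000

def fee_rounded_up(rate_per_kb, size_bytes):
    """What the wallets described appear to do: round the size up to a
    whole number of kilobytes before multiplying."""
    return rate_per_kb * math.ceil(size_bytes / 1000)

# 10000 satoshi/kB on a 233-byte transaction: the exact calculation
# gives 2330 satoshi, while the round-up bug charges the full 10000.
```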

As a reality check, 1 BTC was trading at about $210 USD at that time, so 10000 satoshi was worth about 2.1c at the time; the most expensive transaction in that block, which goes off the scale I’ve used, spent 240000 satoshi in fees, which cost about 50c.

Based on this understanding, we can look back through time to see how this has evolved — and in particular, if this formula and a few common fee levels explain most transactions. And it turns out that they do:

stdfees

The first graph is essentially the raw data — how many of each sort of fee made it through per day; but it’s not very helpful because bitcoin’s grown substantially. Hence the second graph, which just uses the smoothed data and provides the values in percentage terms stacked one on top of the other. That way the coloured area lets you do a rough visual comparison of the proportion of transactions at each “standard” fee level.

In fact, you can break up that graph into a handful of phases where there is a fairly clear and sudden state change between each phase, while the distribution of fees used for transactions during that phase stays relatively stable:

[Figure: stdfee-epochs]

That is:

  1. in the first phase, up until about July 2011, fees were just getting introduced and most people paid nothing; fees began at 1,000,000 satoshi (0.01 BTC) (v 0.3.21) before settling on a fee level of 50000 satoshi per transaction (v 0.3.23).
  2. in the second phase, up until about May 2012, maybe 40% of transactions paid 50000 satoshi per transaction, and almost everyone else paid nothing.
  3. in the third phase, up until about November 2012, close to 80% of transactions paid 50000 satoshi per transaction, with free transactions falling to about 20%.
  4. in the fourth phase, up until July 2013, free transactions continue to drop, however fee paying transactions split about half and half between paying 50000 satoshi and 100000 satoshi. It looks to me like there was an option somewhere to double the default fee in order to get confirmed faster (which also explains the 20000 satoshi fees in future phases)
  5. in the fifth phase, up until November 2013, the 100k satoshi fees started dropping off, and 10k satoshi fees started taking over (v 0.8.3).
  6. in the sixth phase, the year up to November 2014, transactions paying fees of 50k and 100k and free transactions pretty much disappeared, leaving 75% of transactions paying 10k satoshi, and maybe 15% or 20% of transactions paying double that at 20k satoshi.
  7. in the seventh phase, up until July 2015, pretty much everyone using standard fees had settled on 10k satoshi, but an increasing number of transactions started using non-standard fees, presumably variably chosen based on market conditions (v 0.10.0)
  8. in the eighth phase, up until now, things go a bit haywire. What I think happened is the “stress tests” in July and September caused the number of transactions with variable fees to spike substantially, which caused some delays and a lot of panic, and that in turn caused people to switch from 10k to higher fees (including 20k), as well as adopt variable fee estimation policies. However over time, it looks like the proportion of 10k transactions has crept back up, presumably as people remove the higher fees they’d set by hand during the stress tests.
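The phase analysis above implicitly buckets every transaction as either one of a handful of "standard" default fees or a variable, market-driven fee. That classification can be sketched in a couple of lines (the fee levels are taken straight from the phase list; the function name is mine):

```python
# Fee levels the phases above identify as wallet defaults, in satoshis.
STANDARD_FEES = {0, 10000, 20000, 50000, 100000, 1000000}

def classify_fee(fee_satoshi):
    # A transaction paying exactly one of the default amounts is counted
    # under that level; anything else is treated as a variable fee
    # (the behaviour that grows from phase 7 onwards).
    return fee_satoshi if fee_satoshi in STANDARD_FEES else "variable"
```

Run over a block's transactions, this gives the per-level counts that the stacked graphs are built from.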

Okay, apparently that was part one. The next part will take a closer look at the behaviour of transactions paying non-standard fees over the past year, in particular to see if there’s any responsiveness to market conditions — ie prices rising when there’s contention, or dropping when there’s not.

January 06, 2016 03:51 PM

December 30, 2015

Ben FowlerUsing Gravizo to embed UML diagrams and graphs into GitHub-flavoured Markdown

This is insane, but it works: Gravizo is a website that lets you embed diagram source directly in a URL, drop that URL into your HTML or Markdown documents as an image, and have the diagram render, e.g. <img src='http://g.gravizo.com/g? digraph G { main -> parse -> execute; main -> init; main -> cleanup; execute -> make_string; execute -> printf init -> make_string; main -> printf; execute ->
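Since raw DOT source contains characters that aren't URL-safe, it helps to percent-encode it first. A small sketch of building such an image link (the wrapper function is mine, not Gravizo's API):

```python
from urllib.parse import quote

GRAVIZO_BASE = "http://g.gravizo.com/g?"

def gravizo_img(dot_source):
    # Percent-encode the DOT source and wrap it in a Markdown image tag,
    # so the graph renders wherever the Markdown is displayed.
    return "![diagram](" + GRAVIZO_BASE + quote(dot_source) + ")"

markdown = gravizo_img("digraph G { main -> parse -> execute; }")
```

The resulting line can be pasted straight into a GitHub README.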

December 30, 2015 04:06 PM

December 29, 2015

Ben FowlerCuring forking annoyances in GitHub by scripting the Public REST API

I had a minor annoyance today, where I had to find out which of our thirty-odd developers touched a particular file in our source tree, in order to fix a problem blocking our testers from working. We use GitHub Enterprise, so we are using a forking branching model to work. This means that work could lurk out of sight on other peoples' forks of the upstream repository. This sort of thing calls
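The two GitHub REST API endpoints that make this tractable are the fork listing and the commit listing filtered by file path. A sketch of building the relevant request URLs (the owner/repo names are hypothetical; for GitHub Enterprise the API base is typically https://&lt;host&gt;/api/v3):

```python
from urllib.parse import quote

def forks_url(api_base, owner, repo):
    # GitHub's REST API lists a repository's forks at this endpoint,
    # giving you every place work could be lurking out of sight.
    return f"{api_base}/repos/{owner}/{repo}/forks"

def commits_touching(api_base, owner, repo, path):
    # The commits endpoint filters by file path with ?path=, which is
    # exactly the "who touched this file" question.
    return f"{api_base}/repos/{owner}/{repo}/commits?path={quote(path)}"
```

Iterating the fork list and querying each fork's commits for the file in question answers the question without opening thirty browser tabs.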

December 29, 2015 08:36 PM

December 17, 2015

Ben Martin17 Segments are the new Red.

While 7 Segment displays are quite common, the slightly more complex cousin the 17 segment display allows you to show the A-Z range from English and also some additional symbols due to the extra segments.

The unfortunate part of the above kit which I got from Akizukidenshi is that the panel behind the 17 segger effectively treats the display as a 7 segger. So you get some large 7 segment digits but can never display an "A" for example. Although this suits the clock that the kit builds just fine, there is no way I could abide wasting such a nice display by not being able to display text. With the esp8266 and other wifi/ethernet solutions around at a low price point it is handy to be able to display the wind speed, target "feels like" temperature etc as well as just the time.

With that in mind I have 3 of these 17 seggers breadboarded with a two transistor highside and custom lowside using an MCP23017 pin muxer driving two 2803 current sinks which are attached through 8-up 330 ohm resistor packs in IC blocks. This lowside is very useful because with a little care it can be set up on a compact piece of stripboard. All the controlling MCU needs is I2C and it can switch all the cathodes just fine.
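The MCP23017 exposes its 16 GPIO pins as two 8-bit ports, so driving the cathodes through the two 2803 sinks boils down to splitting a segment mask into two port bytes. A sketch of that split (the register addresses are the MCP23017's defaults in BANK=0 mode; the actual segment-to-pin wiring and the I2C write itself are hardware-specific and omitted):

```python
# MCP23017 output register addresses in the default (BANK=0) addressing mode.
GPIOA, GPIOB = 0x12, 0x13

def cathode_bytes(segment_mask):
    # Split a 16-bit cathode mask into the two 8-bit values written to
    # the MCP23017's GPIOA and GPIOB ports, each of which feeds one
    # ULN2803 current sink through the 330 ohm resistor packs.
    return segment_mask & 0xFF, (segment_mask >> 8) & 0xFF

a, b = cathode_bytes(0x03F0)
# a == 0xF0 goes to GPIOA, b == 0x03 goes to GPIOB
```

With the highside transistors selecting the digit, the MCU just clocks one pair of bytes per digit over I2C on each multiplex pass.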

While experimenting I found some nice highside driver ICs. I now have custom PCBs on their way which can each drive 2 displays and can chain left and right to allow an arbitrary number of displays to be connected into a single line display. More info and photos to follow, I just noticed that I hadn't blogged in a while so thought I'd drop this terse post up knowing that more pictures and video are to come. I also have the digits changing through a fade effect. It is much less distracting to go from time to temp and back if you don't jump the digits in one go.

December 17, 2015 11:42 AM

December 13, 2015

Clinton RoyYOW Brisbane 2015

The YOW conference is very kind to local meetup organisers, I was lucky enough to be offered a ticket in return for introducing a couple of sessions.

Monday

Keynote: Adrian Cockcroft

Complexity, understanding, composition and abstraction.

Past, Present and Future of Java: Georges Saab

Some of the new fp/multi core stuff slowly coming down the pipeline. I’ve always had high expectations for Java and the surrounding environment, but every time I’ve used it I’ve been very disappointed. There’s a lot to be said for backwards compatibility, but not at the cost of destroying all the good will your development community has. The changes portrayed in this talk are quite interesting.

Play in C#: Mads Torgersen

This was a highlight of the conference for me. The Roslyn project basically inverted the Microsoft compiler from a sink to a filter which lets it be hooked up directly to the IDE. The live example was adding a linter to the IDE to complain about blocks of code not in brace extensions, complete with one click fixup. It was all very impressive.

Writing a writer: Richard P. Gabriel

Generating poems that get judged to be written by humans, all in lisp of course.

Keynote: Don Reinertsen

This was a very interesting discussion on the natural reaction in an uncertain world: making systems robust. At the very best, the most robust system (robustest? :) will be able to handle the most chaotic world and bring system performance back to normal. This talk asks us to think about the notion of a system that can actually improve in a chaotic world. The theoretic model is based on the financial idea of increasing risk implying increasing returns.

The Future of Software Engineering: Glenn Vanderburg

This was a very interesting talk on the nature of engineering, and how software engineering fits into the discipline. A highlight.

The Miracle of Generators: Bodil Stokke

This was an FP talk; I'm not a fan of bait-and-switch talks.

Tuesday:

NASA Keynote: Anita Sengupta and Kamal Oudrhiri

It’s interesting to be in a room full of engineers being exposed to different engineering requirements.

Agile is Dead: Dave Thomas

A great simplification of the underlying ideas of how to have agility.

Sometimes the Questions are Complicated, but the Answers are Simple: Indu Alagarsamy

A highlight of the conference overall, a talk about a healthy family culture butting up against backwards societal culture.

Keynote: Kathleen Fisher

Formal processes work, but we’re decades off being able to use them for day to day work.

Always Keep a Benchmark in your Back Pocket: Simon Garland

Some rules to keep in mind around designing a benchmark, plus the idea of always doing benchmarking as a way of defending development work to management keen on outsourcing.

Transcript: Jonathan Edwards

One of the talks I chaired.  A very interesting document and form based programming language for non-programmers to use, in the style of hypercard.

The Mother of all Programming Languages Demos: Sean McDirmid

One of the talks I chaired. More interesting ideas coming out of Microsoft. This was heavily based on physical interfaces, I struggled to think how it would apply to regular programming.


Filed under: Uncategorized

December 13, 2015 07:31 AM

December 05, 2015

Ben Fowlerzsh

I wanted a more powerful shell for work, so I used fish for a while. However, I found it jarring: while it has a nice setup out of the box, like syntax highlighting, it's not actually bash-compatible, which is annoying when you go to do something nontrivial on the command line. Enter zsh. I had been giving it a wide berth because of its notoriety for complexity, but I think I've

December 05, 2015 10:11 PM

Ben FowlerWriting a Spotify client in Emacs Lisp (and Helm) in 16 minutes

This is impressive work. There are quite a few tricks here, that I could repurpose to the kind of things I'm doing at work. Demonstrates the power of an editor which is so easily scriptable.

December 05, 2015 06:46 PM


Last updated: December 05, 2016 02:15 AM. Contact Humbug Admin with problems.