

HUMBUGers


April 18, 2016

Ben Martin: Making PCB with a hobby CNC machine

One of the main goals I had in mind when getting a CNC "engraving" machine was to make PCBs at home. It's sort of full circle to the '70s I guess. Only instead of using nasty chemicals, I just have the engraver scratch off an isolation path between traces. Or so the plan goes.


My "hello world" board is the above controller for a 3d printer. This is a follow up to the similar board I made to help use the CNC itself. For a 3d printer I added buttons to set Z=0.1 height and a higher Z height to aid in homing. The breakout headers on the bottom right are for the ESP8266 daughter board. The middle chip is an MCP32017 gpio extender. I've had good experiences using TWI on the ESP8266 and the MCP overcomes the pin limitations quite nicely. It also gives all the buttons a nice central place to go :)

The 3v3 regulator makes the whole show a plug-in-the-AA-pack-and-go type of board. The on/off switch is the physical connection to an external battery.

One step closer to the goal of designing in the morning, physically creating in the afternoon, and using in the evening.

April 18, 2016 08:25 AM

April 12, 2016

Blue Hackers: Explainer: what’s the link between insomnia and mental illness?

April 12, 2016 01:21 AM

April 02, 2016

Blue Hackers: OSMI Mental Health in Tech Survey 2016

April 02, 2016 02:31 AM

March 16, 2016

Blue Hackers: Just made a bad decision? Perhaps anxiety is to blame

http://medicalxpress.com/news/2016-03-bad-decision-anxiety-blame.html

Most people experience anxiety in their lives. For some, it is just a bad, passing feeling, but, for many, anxiety rules their day-to-day lives, even to the point of taking over the decisions they make.

Scientists at the University of Pittsburgh have discovered a mechanism for how anxiety may disrupt decision making. In a study published in The Journal of Neuroscience, they report that anxiety disengages a region of the brain called the prefrontal cortex (PFC), which is critical for flexible decision making. By monitoring the activity of neurons in the PFC while anxious rats had to make decisions about how to get a reward, the scientists made two observations. First, anxiety leads to bad decisions when there are conflicting distractors present. Second, bad decisions under anxiety involve numbing of PFC neurons.

March 16, 2016 12:11 AM

March 04, 2016

Ben Fowler: Contract-first RESTful API development with Spring

At work, my architect gave me an interesting task of building a public API on our application. Fairly straightforward as far as development tasks go, of course, but this time I was asked to take a contract-first approach: write the schema first in a text editor, treat that as the golden source of truth, and then build a RESTful API to adhere to it. We chose Swagger 2. A general observation:

March 04, 2016 01:01 AM

February 26, 2016

Tim Kent: Fluke 17B+ multimeter review

This is just a quick and honest review of my Fluke 17B+ multimeter.

Pros:


Cons:


The included TL75 test leads seem sufficient and seem to be of a reasonable quality.



As a hobbyist, this multimeter should serve me well. If you need more precision or you do AC measurements you might want to consider something like the Brymen BM257S.

I had assumed the 17B+ would have featured auto hold, so I was a bit disappointed about that (in fact it probably would have swayed me toward the BM257S) but I'm certainly not disappointed in the overall product.


February 26, 2016 10:38 PM

Tim Kent: First play with the BitScope Micro

I picked up a BitScope Micro to teach myself a bit about oscilloscopes. So far I've only used the first analog channel to monitor the wave generator, but it seems to do what it says on the box:


The test leads aren't the best quality, but otherwise this looks like a useful tool. You can't expect it to be in the same league as the professional gear, but for the money it's great for learning.


February 26, 2016 10:38 PM

Tim Kent: Aldi 3D printer

I picked up an Aldi "Cocoon Create" 3D printer this morning and spent a bit of time trying it out. It appears to be a rebranded Wanhao I3:


The unit was packed really well and included a few goodies including a really comprehensive printed manual:


Putting the unit together was really simple; here's the unit after following the quick start guide:




It's using Jinsanshi Motor stepper motors; they seem to be fairly quiet:


The earth pin is connected:


Here's a photo I took during bed levelling:


This cord was pinching a bit, so I pulled it upward to resolve the issue:


First print:


All done:


Improving the machine already:



I'm really quite impressed with this machine, especially given the price point. I was up and away printing a job within 15 minutes of opening the box.

The only thing I can whine about is the power supply fan, which is unnecessarily loud. You certainly wouldn't want to have the thing running all day in the same room in an office environment.

February 26, 2016 10:38 PM

February 21, 2016

Clinton Roy: South Coast Track

I’m planning on doing this walk before linux.conf.au 2017, and I’m looking for a couple of experienced hikers to join me. It starts at Melaleuca and finishes at Cockle Creek.

(Embedded Google Map: walking route from Melaleuca to Cockle Creek)

Important Points

This is a reasonably serious hike; you need to have plenty of multi-day hiking experience to join me.

Todo:

Links:



February 21, 2016 04:15 AM

February 18, 2016

Anthony Towns: Bitcoin Fees vs Supply and Demand

Continuing from my previous post on historical Bitcoin fees… Obviously history is fun and all, but it’s safe to say that working out what’s going on now is usually far more interesting and useful. But what’s going on now is… complicated.

First, as was established in the previous post, most transactions are still paying 0.1 mBTC in fees (or 0.1 mBTC per kilobyte, rounded up to the next kilobyte).

(graph: fpb-10k-txns)

Again, as established in the previous post, that’s a fairly naive approach: miners will fill blocks with the smallest transactions that pay the highest fees, so if you pay 0.1 mBTC for a small transaction, that will go in quickly, but if you pay 0.1 mBTC for a large transaction, it might not be included in the blockchain at all.

It’s essentially like going to a petrol station and trying to pay a flat $30 to fill up, rather than per litre (or per gallon); if you’re riding a scooter, you’re probably overpaying; if you’re driving an SUV, nobody will want anything to do with you. Pay per litre, however, and you’ll happily get your tank filled, no matter what gets you around.

But back in the bitcoin world, while miners have been using the per-byte approach since around July 2012, as far as I can tell, users haven’t really even had the option of calculating fees in the same way prior to early 2015, with the release of Bitcoin Core 0.10.0. Further, that release didn’t just change the default fees to be per-byte rather than (essentially) per-transaction; it also dynamically adjusted the per-byte rate based on market conditions — providing an estimate of what fee is likely to be necessary to get a confirmation within a few blocks (under an hour), or within ten or twenty blocks (two to four hours).

There are a few sites around that make these estimates available without having to run Bitcoin Core yourself, such as bitcoinfees.21.co, or bitcoinfees.github.io. The latter has a nice graph of recent fee rates:

(graph: bitcoinfees-github)

You can see from this graph that the estimated fee rates vary over time, both in the peak fee to get a transaction confirmed as quickly as possible, and in how much cheaper it might be if you’re willing to wait.

Of course, that just indicates what you “should” be paying, not what people actually are paying. But since the blockchain is a public ledger, it’s at least possible to sift through the historical record. Rusty already did this, of course, but I think there’s a bit more to discover. There are three ways in which I’m doing things differently to Rusty’s approach: (a) I’m using quantiles instead of an average, (b) I’m separating out transactions that pay a flat 0.1 mBTC, (c) I’m analysing a few different transaction sizes separately.

To go into that in a little more detail:

The following set of graphs take this approach, with each transaction size presented as a separate graph. Each graph breaks the relevant transactions into sixths, selecting the sextiles separating each sixth — each sextile is then smoothed over a 2 week period to make it a bit easier to see.

(graph: fpb-by-sizes)

We can make a few observations from this (click the graph to see it at full size):

As foreshadowed, we can redo those graphs with transactions paying one of the standard fees (ie exactly 0.1 mBTC, 0.01 mBTC, 0.2 mBTC, 0.5 mBTC, 1 mBTC, or 10 mBTC) removed:

(graph: fpb-by-sizes-nonstd)

As before, we can make a few observations from these graphs:

At this point, it’s probably a good idea to check that we’re not looking at just a handful of transactions when we remove those paying standard 0.1 mBTC fees. Graphing the number of transactions per day of each type (ie, total transactions, 220 byte transactions (1-input, 2-output), 370 byte transactions (2-input, 2-output), 520 byte transactions (3-input, 2-output), and 1kB transactions) shows that they all increased over the course of the year, and that there are far more small transactions than large ones. Note that the top-left graph has a linear y-axis, while the other graphs use a logarithmic y-axis — so that each step in the vertical indicates a ten-times increase in the number of transactions per day. No smoothing/averaging has been applied.

(graph: fpb-number-txns)

We can see from this that by and large the number of transactions of each type has been increasing, and that the proportion of transactions paying something other than the standard fees has been increasing. However, it’s also worth noting that the proportion of 3-input transactions using non-standard fees actually decreased in November — which likely indicates that many users (or the maintainers of wallet software used by many users) had simply increased the default fee temporarily while concerned about the stress test, and reverted to defaults when the concern subsided, rather than using a wallet that estimates fees dynamically. In any event, by November 2015, we have at least about a thousand transactions per day at each size, even after excluding standard fees.

If we focus on the sextiles that roughly converge to the trend line we used earlier, we can, in fact, make a very interesting observation: after November 2015, there is significant harmonisation of fee levels across different transaction sizes, and that harmonisation remains fairly steady even as the fee level changes dynamically over time:

(graph: fpb-fee-market)

Observations this time?

Along with the trend line, I’ve added four grey, horizontal guide lines on those graphs; one at each of the standard fee rates for the transaction sizes we’re considering (0.1 mBTC/kB for 1000 byte transactions, 0.19 mBTC/kB for 520 byte transactions, 0.27 mBTC/kB for 370 byte transactions, and 0.45 mBTC/kB for 220 byte transactions).
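For the record, those guide-line rates are just the flat 0.1 mBTC fee re-expressed as a per-kilobyte rate for each transaction size; a quick Python check reproduces the numbers above:

STD_FEE_MBTC = 0.1   # the common flat fee

for size_bytes in (220, 370, 520, 1000):
    rate = STD_FEE_MBTC / (size_bytes / 1000.0)
    print("%4d bytes: %.2f mBTC/kB" % (size_bytes, rate))

# prints 0.45, 0.27, 0.19 and 0.10 mBTC/kB respectively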

An interesting fact to observe is that when the market rate goes above any of the grey dashed lines, transactions of the corresponding size that just pay the standard 0.1 mBTC fee become less profitable to mine than transactions that pay fees at the market rate. In a very particular sense this will induce a “fee event” of the type mentioned earlier. That is, with the fee rate above 0.1 mBTC/kB, transactions of around 1000 bytes that pay 0.1 mBTC will generally suffer delays. Following the graph, for the transactions we’re looking at there have already been two such events — a fee event in July 2015, where 1000 byte transactions paying standard fees began getting delayed regularly because market fees began exceeding 0.1 mBTC/kB (ie, the 0.1 mBTC fee divided by 1 kB transaction size); and following that a second fee event during November impacting 3-input, 2-output transactions, due to market fees exceeding 0.19 mBTC/kB (ie, 0.1 mBTC divided by 0.52 kB). Per the graph, a few of the trend lines are lingering around 0.27 mBTC/kB, indicating a third fee event is approaching, where 370 byte transactions (ie 2-input, 2-output) paying standard fees will start to suffer delayed confirmations.

However the grey lines can also be considered as providing “resistance” to fee increases — for the market rate to go above 0.27 mBTC/kB, there must be more transactions attempting to pay the market rate than there were 2-input, 2-output transactions paying 0.1 mBTC. And there were a lot of those — tens of thousands — which means market fees will only be able to increase with higher adoption of software that calculates fees using dynamic estimates.

It’s not clear to me why fees harmonised so effectively as of November; my best guess is that it’s just the result of gradually increasing adoption, accentuated by my choice of quantiles to look at, along with averaging those results over a fortnight. At any rate, most of the interesting activity seems to have taken place around August:

Of course, many wallets still don’t do per-byte, dynamic fees as far as I can tell:

Summary

February 18, 2016 09:40 AM

February 08, 2016

Ben Fowler: Swagger sucks

I am surprised at how much of a step backwards JSON-based web services using "standards" like Swagger are, compared to SOAP. The tooling (especially Swagger-anything) is amateur-hour, dilettante crap, written by teenagers with short attention spans, in shoddy JS (or working Java that compiles, if you're lucky), to a shoddy, incomplete specification which nobody bothers to adhere to, or implement

February 08, 2016 10:19 PM

Ben Fowler: Swagger is garbage

I am shocked at how much of a step backwards JSON-based web services are, compared to SOAP. The tooling (especially Swagger-anything) is unadulterated, amateur-hour, dilettante garbage, written by teenagers with short attention spans, in shoddy JS (or working Java that compiles, if you're lucky), to a shoddy, incomplete specification which nobody bothers to adhere to, or implement correctly or

February 08, 2016 09:52 PM

January 28, 2016

Ben Martin: CNC Control with MQTT

I recently upgraded a 3040 CNC machine by replacing the parallel-port-driven driver board with a smoothieboard. This runs a 100 MHz Cortex-M MCU and has USB and ethernet interfaces, which is much more modern. This all led me to come up with a new controller to move the cutting head, all without needing to update the controller box or recompile or touch the smoothieboard firmware.



I built a small controller box with 12 buttons on it and shoved an ESP8266 into that box with an MCP23017 chip to allow access to 16 GPIOs over TWI from the ESP MCU. The firmware on the ESP is fairly simple: it enables the internal pull-ups on all GPIO pins on the 23017 chip and sends an MQTT message when each button is pressed and released. The time since MCU boot in milliseconds is sent as the MQTT payload. This way, one can work out whether it was a short or longer button press and move the cutting head a proportional distance.

The web interface for smoothie provides a pronterface-like interface for manipulating where the cutting head is on the board and the height it is at. Luckily it's open-source firmware, so I can see the non-obfuscated JavaScript that the web interface uses, and work out the correct POST method to send gcode commands directly to the smoothieboard on the CNC.

The interesting design here is using software on the server to make the controller box meet the smoothieboard. On the server, MQTT messages are turned into POST requests using mqtt-launcher. The massive benefit here is that I can change what each button does on the CNC without needing to reprogram the controller or modify the CNC firmware. Just change the mqtt-launcher config file and all is well. So far MQTT is the best "IoT" tech I've had the privilege to use.
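If you would rather roll this glue yourself than use mqtt-launcher, the whole idea fits in a page of Python using paho-mqtt and requests. This is only a sketch: the topic names, the button-to-gcode mapping and the smoothieboard URL below are illustrative placeholders, not my actual config.

# Sketch only: turn MQTT button presses into gcode POSTs.
# Topics, gcode mapping and the URL are illustrative placeholders.
import paho.mqtt.client as mqtt
import requests

SMOOTHIE_URL = "http://cnc.example.lan/command"   # placeholder endpoint
BUTTON_GCODE = {
    "cnc/buttons/x_minus": "G91 G0 X-1",          # example mapping only
    "cnc/buttons/x_plus":  "G91 G0 X1",
    "cnc/buttons/z_down":  "G91 G0 Z-0.1",
}

def on_message(client, userdata, msg):
    gcode = BUTTON_GCODE.get(msg.topic)
    if gcode:
        # remapping a button is just an edit to this table; no reflashing
        requests.post(SMOOTHIE_URL, data=gcode + "\n")

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost")
for topic in BUTTON_GCODE:
    client.subscribe(topic)
client.loop_forever()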



I'll probably build another controller for controlling 3d printers. Although most 3d printers just home each axis, there are sometimes some pesky commands that must be run at startup, to help home the z-axis for example. Having physical buttons to move the z-axis down by 4mm, 1mm and 0.1mm makes it so much less likely to fat finger the web interface and accidentally crash the bed by initiating a larger z-axis movement than one had hoped for.

January 28, 2016 10:22 AM

January 23, 2016

Tim Kent: Modifying the (Aldi) Lumina coffee grinder to produce a finer grind

I purchased one of the Lumina coffee grinders from Aldi a few years ago, but only just tried to use it for the first time recently. Even on the finest setting it was producing grinds that were very coarse. The product really should be marketed as a herb grinder in standard form.


As this machine uses burrs to do the grinding I thought I'd try shimming one of the burrs to bring the two closer together, hopefully resulting in a finer grind. This actually ended up working really well, and I've been using it at least three times a week for a few months now for all my coffee grinding duties!

Here's what you will need to do the modification:


First of all you'll want to remove the top burr assembly, which you can do by turning it clockwise:


You can see the tabs that lock it into place here:


Once you've removed it, flip it upside down and you should be able to see three Phillips head screws; you'll want to remove these:


Lift the metal burr away from the rest of the plastic assembly and you will then be able to fit the three 8x1mm washers as per the picture:


As you can see this has shimmed the burr by 1mm:


Fit the burr back onto the assembly and fasten the screws; don't go overboard with force though, as they are just screwing into plastic.

Adjust the machine to the coarse setting and insert the top burr assembly; lock it into place by turning anti-clockwise.

Test out some grinding!

January 23, 2016 04:21 AM

Blue Hackers: Science on High IQ, Empathy and Social Anxiety | Feelguide.com

http://www.feelguide.com/2015/04/22/science-links-anxiety-to-high-iqs-sentinel-intelligence-social-anxiety-to-very-rare-psychic-gift/

Although Western medicine has radically transformed our world for the better, and given rise to some of the most remarkable breakthroughs in human history, in some ways it is still scratching at the lower slopes of the bigger picture. Only recently have our health systems begun to embrace the healing power of some ancient Eastern traditions such as meditation, for example. But overall, nowhere across the human health spectrum is Western medicine more unknowledgeable than in the realm of mental health. The human brain is the most complex biological machine in the known Universe, and our understanding of its inner workings is made all the more challenging when we factor in the symbiotic relationship of the mind-body connection.

When it comes to the wide range of diagnoses in the mental health spectrum, anxiety is the most common — affecting 40 million adults in the United States age 18 and older (18% of U.S. population). And although anxiety can manifest in extreme and sometimes crippling degrees of intensity, Western doctors are warming up to the understanding that a little bit of anxiety could be incredibly beneficial in the most unexpected ways. One research study out of Lakehead University discovered that people with anxiety scored higher on verbal intelligence tests. Another study conducted by the Interdisciplinary Center Herzliya in Israel found that people with anxiety were superior to other participants at maintaining laser-focus while overcoming a primary threat as they were being bombarded by numerous other smaller threats, thereby significantly increasing their chances of survival. The same research team also discovered that people with anxiety showed signs of “sentinel intelligence”, meaning they were able to detect real threats that were invisible to others (i.e. test participants with anxiety were able to detect the smell of smoke long before others in the group).

Another research study from the SUNY Downstate Medical Center in New York involved participants with generalized anxiety disorder (GAD). The findings revealed that people with severe cases of GAD had much higher IQs than those who had milder cases. The theory is that “an anxious mind is a searching mind,” meaning children with GAD develop higher levels of cognitive ability and diligence because their minds are constantly examining ideas, information, and experiences from multiple angles simultaneously.

But perhaps most fascinating of all is a research study published by the National Institutes of Health and the National Center for Biotechnology Information involving participants with social anxiety disorder (i.e. social phobia). The researchers embarked on their study with the following thesis: “Individuals with social phobia (SP) show sensitivity and attentiveness to other people’s states of mind. Although cognitive processes in SP have been extensively studied, these individuals’ social cognition characteristics have never been examined before. We hypothesized that high-socially-anxious individuals (HSA) may exhibit elevated mentalizing and empathic abilities.” The research methods were as follows: “Empathy was assessed using self-rating scales in HSA individuals (n=21) and low-socially-anxious (LSA) individuals (n=22), based on their score on the Liebowitz social anxiety scale. A computerized task was used to assess the ability to judge first and second order affective vs. cognitive mental state attributions.”

Remarkably, the scientists found that a large portion of people with social anxiety disorder are gifted empaths — people whose right-brains are operating significantly above normal levels and are able to perceive the physical sensitivities, spiritual urges, motivations, and intentions of other people around them (see Dr. Jill Bolte Taylor’s TED Talk below for a powerful explanation of this ability). The team’s conclusion reads: “Results support the hypothesis that high-socially-anxious individuals demonstrate a unique profile of social-cognitive abilities with elevated cognitive empathy tendencies and high accuracy in affective mental state attributions.” To understand more about the traits of an empath you can CLICK HERE. And to see if you align with the 22 most common traits of an empath CLICK HERE.

Empaths who have fully embraced their abilities are able to function on a purely intuition-based level. As Steve Jobs once said, “[Intuition] is more powerful than intellect,” and in keeping with this appreciation, writer Carolyn Gregoire recently penned a fascinating feature entitled “10 Things Highly Intuitive People Do Differently” and you can read it in full by visiting HuffingtonPost.com. And to learn why Western medicine may be misinterpreting mental illness at large, be sure to read the fascinating account of Malidoma Patrice Somé, Ph.D. — a shaman and a Western-trained doctor. “In the shamanic view, mental illness signals the birth of a healer, explains Malidoma Patrice Somé. Thus, mental disorders are spiritual emergencies, spiritual crises, and need to be regarded as such to aid the healer in being born.” You can read the full story by reading “What A Shaman Sees In A Mental Hospital”. For more great stories about the human brain be sure to visit The Human Brain on FEELguide. (Sources: Business Insider, The Mind Unleashed, Huffington Post, photo courtesy of My Science Academy).

January 23, 2016 01:13 AM

January 22, 2016

Tim Kent: Learning more about EFI systems

I've been interested in the inner workings of cars and engines for a few years now, in particular engine management systems. One project that has my attention is MegaSquirt, which is an Electronic Fuel Injection controller intended to teach people how EFI works. I've decided to get in on the action, but since my soldering skills are a bit rusty I'm starting with the MegaStim kit (much easier to build), which is a small diagnostics device used to test out MegaSquirt hardware. I purchased the kit from DIYAutoTune.com and was impressed with the service; I'll get the rest of my MegaSquirt gear from them for sure. Even if I don't end up using MegaSquirt for anything, I'm sure it will still be worth it for the experience.

January 22, 2016 02:12 PM

Tim Kent: Dealing with Japanese imports

A number of my friends have recently purchased Japanese cars. One of the minor issues that is often overlooked is the inability to tune into any FM radio stations above 90MHz. Since I was working on one of said cars this afternoon, I thought it would be a good time to mention this problem. Among other things, be prepared to replace the stereo if you import!

Even taking these things into consideration, Japanese imports are still great value! You get way more car for your dollar.

January 22, 2016 02:12 PM

Tim Kent: Fight against dirty wheels

I don't know what it is with European cars, but most of them come standard with brake pads that love to totally cover the wheels with carbon. I've attached a picture of what my wheels look like when it's been a couple of weeks since the last wash. Next service I'm supplying a set of EBC Greenstuff brake pads, which should hopefully put an end to this problem; I'll keep you posted.

January 22, 2016 02:12 PM

Tim Kent: Moving from VMware Server to ESXi

At home I'm currently using VMware Server with Windows 2003 as the host OS. In addition to running 5 guest operating systems, the host OS performs the following tasks:

Lately I've been reading up on VMware ESXi and it appears as though my existing hardware is going to work; however, I'm having a hard time deciding if the extra efficiency is worth the hassle. From what I've read I will have to find a different way to perform backups since local USB devices aren't supported; in addition I will have to provision a VM to perform the NAT and routing duties. On the other hand, the I/O struggles at times with VMware Server, so the extra performance and stability from ESXi would be welcomed; I've had VMware Server's NAT implementation crash twice during 18 months of use.

If anyone out there has made the move, I'd love to hear their experiences and feedback!

January 22, 2016 02:11 PM

Tim Kent: Killing the registration prompt in RawShooter Essentials 2006

I'm still using the final version of RawShooter Essentials as it supports my SLR's RAW format (Adobe have now bought out Pixmantec, so this is no longer being updated or supplied by them; it is only available from other sources such as download.com). So, if you've managed to acquire it you will find that whenever RawShooter Essentials is freshly installed it will prompt you to register each time you start the program. As Adobe have shut down Pixmantec's servers the registration will fail. I compared the Windows registry from a fresh install with an existing (registered) copy and found that the registered copy had some extra registry entries. With these entries you should be able to kill the annoying prompt:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\CLSID\{0F540988-8449-4C30-921E-74BCCEA70535}]

[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\CLSID\{0F540988-8449-4C30-921E-74BCCEA70535}\ProgId]
Save the above contents to a file with the extension ".REG" and double-click it to install the entries. You may have to fix the lines if they wrap. You will now find that next time you open the program the registration prompt will be gone.

January 22, 2016 02:11 PM

Tim Kent: Archiving files from my Topfield PVR

I've had a Topfield PVR for quite a few years now. The unit is great, I can't fault it really. Until recently I did, however, have one ongoing problem: I kept running out of space! To help combat the space problem I upgraded to a Samsung 400GB drive, but this was only a short-term band-aid.

The next solution was commissioning a Linksys NSLU2 running uNSLUng and ftpd-topfield to allow FTP access to the unit (my computer isn't anywhere near the TV and the Topfield only has a USB port). So the space problem on the Topfield was fixed, but I had loads of transport stream files sitting on my computer. It was just too expensive (time-wise) to edit out all the ads, convert to MPEG-2 and burn to DVD or DivX. So last weekend I scripted it:
Seems to work quite nicely; the ad detection works fairly well but it's not 100% perfect. One thing I had to do to get comskip working was rename the file extension from REC to TS.

The whole thing was fairly trivial after reading the CLI documentation for each program, but if you need a hand feel free to contact me.

January 22, 2016 02:11 PM

Tim Kent: Other efforts to stop Spam

Watching the logs closely after my Greylisting install made me notice just how many attempts are being made to deliver junk to my mail server. I thought I'd add a few more checks to Postfix so the messages don't even make it to the Greylisting stage. The most effective ones are: requiring a fully qualified HELO string (you'd be surprised how many Spammers just use HELO localhost), and checking that the sender exists before accepting the message. This is done with reject_non_fqdn_hostname under smtpd_recipient_restrictions, and reject_unverified_sender under smtpd_sender_restrictions, respectively. A good guide on how to set up sender verification can be found on the Postfix website.
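For reference, the relevant main.cf lines end up looking something like the excerpt below. This is a minimal sketch only; a real configuration will have other entries in both restriction lists.

# /etc/postfix/main.cf (excerpt) - just the two checks described above
smtpd_recipient_restrictions = reject_non_fqdn_hostname
smtpd_sender_restrictions = reject_unverified_sender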

January 22, 2016 02:11 PM

Tim Kent: DNS resolution on iPhone

I've been playing with a few iPhones lately and have had trouble getting WiFi working through our proxy. After much hair pulling, the problem turned out to be a feature in the iPhone DNS resolver that refuses to look up any hostname ending in ".local". This also appears to be a problem on Mac OS X:

http://support.apple.com/kb/HT2385?viewlocale=en_US

With OS X you can add "local" to the Search Domains field and disable this behaviour, unfortunately it doesn't work for the iPhone.

January 22, 2016 02:11 PM

Tim Kent: Data destruction

After cleaning my home office I was left with some old hard drives to dispose of, this got me thinking about data destruction. In the past I cleared my drives with a couple of passes of random data using dd, but is this thorough enough?

This time round I have used a free bootable CD called CopyWipe (great utility, BootIt NG is also worth a mention). Each drive was given 5 passes, and then taken to with a hammer just to be sure. I've linked a picture to the "after" shot.

I can see data destruction being a larger problem as time goes on. I'd be interested to know the techniques others use for this problem.

January 22, 2016 02:11 PM

Tim Kent: Interesting Microsoft disclaimer

Hotmail's disclaimer caught my attention whilst I was troubleshooting an Internet connection using netcat. Looks like they're either hiring the Mafia to do their dirty work, or taking matters into their own hands:
Violations will result in use of equipment located in California and other states.
This disclaimer is displayed when you first connect to the SMTP port of mail.hotmail.com. I have no idea what kind of torture the disclaimer implies but I'd rather not find out!

January 22, 2016 02:11 PM

Tim Kent: Sick of Spam

For some reason, Spammers love my e-mail address. I'm guessing it's one of my posts to Usenet where they have harvested my particular address. I've been looking into Greylisting recently to help combat the situation, specifically Postgrey (I use the Postfix MTA).

Since setting this up, I have yet to receive an unsolicited message! That's a big difference considering I was getting around 50 junk messages a day beforehand. Installation was very simple under Debian as it has already been packaged, so you just need to apt-get it and add a check_policy_service line to your smtpd_recipient_restrictions in main.cf.
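The main.cf change is a single extra entry; something like the excerpt below, assuming the Debian default Postgrey listener on port 10023 (check /etc/default/postgrey for yours). The permit_mynetworks and reject_unauth_destination entries here just stand in for whatever restrictions you already have.

# /etc/postfix/main.cf (excerpt) - Postgrey policy check added last
smtpd_recipient_restrictions = permit_mynetworks,
    reject_unauth_destination,
    check_policy_service inet:127.0.0.1:10023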

January 22, 2016 02:11 PM

Tim Kent: VoIP headaches

I've recently signed up with PennyTel to get better prices on phone calls. This was after two relatives of mine both recommended PennyTel and said how easy the whole thing was to set up when using a Linksys SPA-3102.

OK, so I signed up and purchased the Linksys device. I set the networking stuff through the phone then followed the guide on the PennyTel website to configure SIP (VoIP connectivity stuff). I was feeling pretty good about the whole thing, that is until I made the first phone call!

I thought I'd try to impress a mate so I called up one of my tech savvy friends and told them I was using VoIP to talk to them. The quality sounded quite good, then after 32 seconds the call dropped out! I had called a mobile so I thought it may just be a glitch. The next two calls resulted in the same drop out after 32 seconds. By this stage my friend thought it was quite amusing that my new phone service was so unreliable after I had been boasting about the cheap call rates!

After hours of Googling and messages back and forth between PennyTel support, I still hadn't managed to avoid the call drop out, or another intermittent problem where the SIP registration was randomly failing. The settings looked fine, and PennyTel didn't appear to have any outages as I tested things with a soft phone from another DSL connection. I was really regretting the whole thing, and getting pretty pissed off. I had a think about the whole scenario, and the only thing I hadn't eliminated was my DrayTek Vigor 2600We ADSL router. I had already set the port forwards required for the Linksys SPA (UDP 5060-5061 and 16384-16482) so thought nothing more of router configuration. As a last resort, I searched the Internet for people running VoIP through their DrayTek to see if any incompatibilities existed. I came across a site with someone experiencing my exact problem, and they had a workaround! It appears that the 2600We has a SIP application layer proxy enabled by default. This really confuses things on the Linksys and has to be disabled. After telnetting to the device and entering the following command, things were working great:

sys sip_alg 0

Note that you may need to upgrade your DrayTek firmware for this command to be available.

After the changes I made some calls and no longer got disconnected after 32 seconds! Woohoo! At the end of the day I'm glad I chose VoIP for the cost savings, even though it caused me grief the first few days.

Update: One other setting I have found needed a bit of tweaking was the dial plan. Here is my current Brisbane dial plan for an example:

(000S0<:@gw0>|<:07>[3-5]xxxxxxxS0|0[23478]xxxxxxxxS0|1[38]xx.<:@gw0>|19xx.!|xx.)

January 22, 2016 02:11 PM

Tim Kent: Keeping up with multiple Blogs

After a quick search for RSS aggregators, I found Google Reader to do exactly what I want. It saves so much of my time monitoring RSS feeds from a central location, and being web-based means I can access it from anywhere. In case you are wondering, I don't work for Google!

Update: Since publishing this, I've had a few people tell me how much Google Reader sucks compared to the alternatives out there. Greg Black recommended Bloglines to me, and after giving it a go I must say that it's a much better solution. Looks like I'll stick with this one for a while.

January 22, 2016 02:11 PM

Tim Kent: BlackBerry MDS proxy pain

I'm just having a rant about MDS SSL connections through a proxy. Non-SSL traffic will work fine, however SSL traffic appears to go direct even when proxy settings have been defined as per KB11028. My regular expression matches the addresses fine.

Surely people out there want/need to proxy all their BES MDS traffic?

January 22, 2016 02:11 PM

Tim Kent: A potential backup solution for small sites running VMware ESXi

Today, external consumer USB3 and/or eSATA drives can be a great low cost alternative to tape. For most small outfits, they fulfil the speed and capacity requirements for nightly backups. I use the same rotation scheme with these drives as I did tape with great success.

Unfortunately these drives can't easily be utilised by those running virtualised servers on top of ESXi. VMware offers SCSI pass-through as a supported option, however the tape drives and media are quite expensive by comparison.

VMware offered a glimpse of hope with their USB pass-through introduced in ESXi 4.1, but it proved to have extremely poor throughput (~7MB/sec) so can realistically only shift a couple of hundred GB at most per night.

I have trialled some USB over IP devices; the best of these can lift the throughput from ~7MB/sec to ~25MB/sec, but the drivers can be problematic and are often only available for Windows platforms.

This got me thinking about presenting a USB3 controller via ESXi's VMDirectPath I/O feature.

VMDirectPath I/O requires a CPU and motherboard capable of Intel Virtualization Technology for Directed I/O (VT-d) or AMD I/O Virtualization Technology (IOMMU). It also requires that your target VM is at a hardware level of 7 or greater. A full list of requirements can be found at http://kb.vmware.com/kb/1010789.

I tested pass-through on a card with the NEC/Renesas uPD720200A chipset (Lindy part # 51122) running firmware 4015. The test VM runs Windows Server 2003R2 with the Renesas 2.1.28.1 driver. I had to configure the VM with pciPassthru0.msiEnabled = "FALSE" as per http://www.vmware.com/pdf/vsp_4_vmdirectpath_host.pdf or the device would show up with a yellow bang in Device Manager and would not function.

The final result - over 80MB/sec throughput (both read and write) from a Seagate 2.5" USB3 drive!

January 22, 2016 02:10 PM

January 12, 2016

Ben Fowler: Java gets monads

Something which came out of a mentoring session at work today: the guy leading the discussion used Optional<T> everywhere, but subsequently handled them in a fairly raw, naive way. https://dzone.com/articles/whats-wrong-java-8-part-iv It got me thinking: where's Java's for-comprehension syntax and why is it missing? Well, it would seem that the Stream APIs go much of the way to filling this

January 12, 2016 05:55 PM

January 06, 2016

Anthony Towns: Bitcoin Fees in History

Prior to Christmas, Rusty did an interesting post on bitcoin fees which I thought warranted more investigation. My first go involved some python parsing of bitcoin-cli results, which was slow and, as it turned out, inaccurate — bitcoin-cli returns figures denominated in bitcoin with 8 digits after the decimal point, and python happily rounds that off, making me think a bunch of transactions that paid 0.0001 BTC in fees were paying 0.00009999 BTC in fees. Embarrassing. Anyway, switching to bitcoin-iterate and working in satoshis instead of bitcoin just as Rusty did was a massive improvement.

From a miner’s perspective (ie, the people who run the computers that make bitcoin secure), fees are largely irrelevant — they’re receiving around $11000 USD every ten minutes in inflation subsidy, versus around $80 USD in fees. If that dropped to zero, it really wouldn’t make a difference. However, in around six months the inflation subsidy will halve to 12.5 BTC, which, if the value of bitcoin doesn’t rise enough to compensate, may mean miners will start looking to turn fee income into real money — earning $5500 in subsidy plus $800 from fees could be a plausible scenario, for example (though even that doesn’t seem likely any time soon).

Even so, miners don’t ignore fees entirely even now — they use fees to choose how to fill up about 95% of each block (with the other 5% filled up more or less according to how old the bitcoins being spent are). In theory, that’s the economically rational thing to do, and if the theory pans out, miners will keep doing that when they start trying to get real income from fees rather than relying almost entirely on the inflation subsidy. There’s one caveat though: since different transactions are different sizes, fees are divided by the transaction size to give the fee-per-kilobyte before being compared. If you graph the fee paid by each kB in a block you thus get a fairly standard sort of result — here’s a graph of a block from a year ago, with the first 50kB (the priority area) highlighted:

(graph: block)

You can see a clear overarching trend where the fee rate starts off high and gradually decreases, with two exceptions: first, the first 50kB (shaded in green) has much lower fees due to mining by priority; and second, there are frequent short spikes of high fees, which are likely produced by high fee transactions that spend the coins mined in the preceding transaction — ie, if they had been put any earlier in the block, they would have been invalid. Equally, compared to the priority of the first 50kB of transactions, the remaining almost 700kB contributes very little in terms of priority.

But, as it turns out, bitcoin wallet software often just picks a particular fee and uses it for all transactions, no matter the size:

(graph: block-raw-fee)

From the left hand graph you can see that, a year ago, wallet software was mostly paying about 10000 satoshi in fees, with a significant minority paying 50000 satoshi in fees — but since those were at the end of the block, which was ordered by satoshis per byte, those transactions were much bigger, so that their fee/kB was lower. This seems to be due to some shady maths: while the straightforward way of doing things would be to have a per-byte fee and multiply that by the transaction’s size in bytes, eg 10 satoshis/byte * 233 bytes gives 2330 satoshi fee; things are done in kilobytes instead, and a rounding mistake occurs, so rather than calculating 10000 satoshis/kilobyte * 0.233 kilobytes, the 0.233 is rounded up to 1kB first, and the result is just 10000 satoshi. The second graph reverses the maths to work out what the fee/kilobyte (or part thereof) figure would have been if this formula was used, and for this particular block, pretty much all the transactions look how you’d expect if exactly that formula was used.
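To make the “shady maths” concrete, here is the difference between the two calculations for the 233-byte example above, in plain Python:

import math

fee_rate   = 10000   # satoshi per kilobyte
size_bytes = 233

per_byte   = fee_rate * size_bytes / 1000             # 2330.0 satoshi: the straightforward answer
rounded_kb = fee_rate * math.ceil(size_bytes / 1000)  # 10000 satoshi: size rounded up to a whole kB first

print(per_byte, rounded_kb)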

As a reality check, 1 BTC was trading at about $210 USD at that time, so 10000 satoshi was worth about 2.1c at the time; the most expensive transaction in that block, which goes off the scale I’ve used, spent 240000 satoshi in fees, which cost about 50c.

Based on this understanding, we can look back through time to see how this has evolved — and in particular, if this formula and a few common fee levels explain most transactions. And it turns out that they do:

(graph: stdfees)

The first graph is essentially the raw data — how many of each sort of fee made it through per day; but it’s not very helpful because bitcoin’s grown substantially. Hence the second graph, which just uses the smoothed data and provides the values in percentage terms stacked one on top of the other. That way the coloured area lets you do a rough visual comparison of the proportion of transactions at each “standard” fee level.

In fact, you can break up that graph into a handful of phases where there is a fairly clear and sudden state change between each phase, while the distribution of fees used for transactions during that phase stays relatively stable:

(graph: stdfee-epochs)

That is:

  1. in the first phase, up until about July 2011, fees were just getting introduced and most people paid nothing; fees began at 1,000,000 satoshi (0.01 BTC) (v 0.3.21) before settling on a fee level of 50000 satoshi per transaction (0.3.23).
  2. in the second phase, up until about May 2012, maybe 40% of transactions paid 50000 satoshi per transaction, and almost everyone else didn’t pay anything
  3. in the third phase, up until about November 2012, close to 80% of transactions paid 50000 satoshi per transaction, with free transactions falling to about 20%.
  4. in the fourth phase, up until July 2013, free transactions continue to drop, however fee paying transactions split about half and half between paying 50000 satoshi and 100000 satoshi. It looks to me like there was an option somewhere to double the default fee in order to get confirmed faster (which also explains the 20000 satoshi fees in future phases)
  5. in the fifth phase, up until November 2013, the 100k satoshi fees started dropping off, and 10k satoshi fees started taking over (v 0.8.3)
  6. in the sixth phase, the year up to November 2014, transactions paying fees of 50k and 100k and free transactions pretty much disappeared, leaving 75% of transactions paying 10k satoshi, and maybe 15% or 20% of transactions paying double that at 20k satoshi.
  7. in the seventh phase, up until July 2015, pretty much everyone using standard fees had settled on 10k satoshi, but an increasing number of transactions started using non-standard fees, presumably variably chosen based on market conditions (v 0.10.0)
  8. in the eighth phase, up until now, things go a bit haywire. What I think happened is the “stress tests” in July and September caused the number of transactions with variable fees to spike substantially, which caused some delays and a lot of panic, and that in turn caused people to switch from 10k to higher fees (including 20k), as well as adopt variable fee estimation policies. However over time, it looks like the proportion of 10k transactions has crept back up, presumably as people remove the higher fees they’d set by hand during the stress tests.

Okay, apparently that was part one. The next part will take a closer look at the behaviour of transactions paying non-standard fees over the past year, in particular to see if there’s any responsiveness to market conditions — ie prices rising when there’s contention, or dropping when there’s not.

January 06, 2016 03:51 PM

December 30, 2015

Ben Fowler: Using Gravizo to embed UML diagrams and graphs into GitHub-flavoured Markdown

This is insane, but it works: Gravizo is a website that lets you insert URLs with embedded diagrams in them into your HTML or Markdown documents, and have them render, e.g. <img src='http://g.gravizo.com/g? digraph G { main -> parse -> execute; main -> init; main -> cleanup; execute -> make_string; execute -> printf init -> make_string; main -> printf; execute ->

December 30, 2015 04:06 PM

December 29, 2015

Ben Fowler: Curing forking annoyances in GitHub by scripting the Public REST API

I had a minor annoyance today, where I had to find out which of our thirty-odd developers touched a particular file in our source tree, in order to fix a problem blocking our testers from working. We use GitHub Enterprise, so we are using a forking branching model to work. This means that work could lurk out of sight on other people's forks of the upstream repository. This sort of thing calls

December 29, 2015 08:36 PM

December 17, 2015

Ben Martin: 17 Segments are the new Red.

While 7-segment displays are quite common, their slightly more complex cousin, the 17-segment display, allows you to show the A-Z range from English and also some additional symbols thanks to the extra segments.

The unfortunate part of the above kit, which I got from Akizukidenshi, is that the panel behind the 17 segger effectively treats the display as a 7 segger. So you get some large 7 segment digits but can never display an "A", for example. Although this suits the clock that the kit builds just fine, there is no way I could abide wasting such a nice display by not being able to display text. With the ESP8266 and other wifi/ethernet solutions around at a low price point it is handy to be able to display the wind speed, target "feels like" temperature etc as well as just the time.

With that in mind I have 3 of these 17 seggers breadboarded with a two-transistor highside and a custom lowside using an MCP23017 pin muxer driving two 2803 current sinks, which are attached using 8-up 330 ohm resistors in IC blocks. This lowside is very useful because with a little care it can be set up on a compact piece of stripboard. All the controlling MCU needs is I2C and it can switch all the cathodes just fine.

While experimenting I found some nice highside driver ICs. I now have custom PCBs on their way, each of which can drive 2 displays and can chain left and right to allow an arbitrary number of displays to be connected into a single line display. More info and photos to follow; I just noticed that I hadn't blogged in a while, so thought I'd drop this terse post up knowing that more pictures and video are to come. I also have the digits changing through a fade effect. It is much less distracting to go from time to temp and back if you don't jump the digits in one go.

December 17, 2015 11:42 AM

December 13, 2015

Clinton Roy: YOW Brisbane 2015

The YOW conference is very kind to local meetup organisers; I was lucky enough to be offered a ticket in return for introducing a couple of sessions.

Monday

Keynote: Adrian Cockcroft

Complexity, understanding, composition and abstraction.

Past, Present and Future of Java: Georges Saab

Some of the new fp/multi core stuff slowly coming down the pipeline. I’ve always had high expectations for Java and the surrounding environment, but every time I’ve used it I’ve been very disappointed. There’s a lot to be said for backwards compatibility, but not at the cost of destroying all the good will your development community has. The changes portrayed in this talk are quite interesting.

Play in C#: Mads Torgersen

This was a highlight of the conference for me. The Roslyn project basically inverted the Microsoft compiler from a sink to a filter which lets it be hooked up directly to the IDE. The live example was adding a linter to the IDE to complain about blocks of code not in brace extensions, complete with one click fixup. It was all very impressive.

Writing a writer: Richard P. Gabriel

Generating poems that get judged to be written by humans, all in lisp of course.

Keynote: Don Reinertsen

This was a very interesting discussion on the natural reaction in an uncertain world: making systems robust. At the very best, the most robust system (robustest? :) will be able to handle the most chaotic world and bring system performance back to normal. This talk asks us to think about the notion of a system that can actually improve in a chaotic world. The theoretic model is based on the financial idea of increasing risk implying increasing returns.

The Future of Software Engineering: Glenn Vanderburg

This was a very interesting talk on the nature of engineering, and how software engineering fits into the discipline. A highlight.

The Miracle of Generators: Bodil Stokke

This was an FP talk, I’m not a fan of bait-and-switch talks.

Tuesday:

NASA Keynote: Anita Sengupta and Kamal Oudrhiri

It’s interesting to be in a room full of engineers being exposed to different engineering requirements.

Agile is Dead: Dave Thomas

A great simplification of the underlying ideas of how to have agility.

Sometimes the Questions are Complicated, but the Answers are Simple: Indu Alagarsamy

A highlight of the conference overall, a talk about a healthy family culture butting up against backwards societal culture.

Keynote: Kathleen Fisher

Formal processes work, but we’re decades off being able to use them for day to day work.

Always Keep a Benchmark in your Back Pocket: Simon Garland

Some rules to keep in mind around designing a benchmark, plus the idea of always doing benchmarking as a way of defending development work to management keen on outsourcing.

Transcript: Jonathan Edwards

One of the talks I chaired. A very interesting document- and form-based programming language for non-programmers to use, in the style of HyperCard.

The Mother of all Programming Languages Demos: Sean McDirmid

One of the talks I chaired. More interesting ideas coming out of Microsoft. This was heavily based on physical interfaces, I struggled to think how it would apply to regular programming.

December 13, 2015 07:31 AM

December 08, 2015

Adrian Sutton: Alert Dialogs Do Not Appear When Using WebDriverBackedSelenium

With Selenium 1, JavaScript alert and confirmation dialogs were intercepted by the Selenium JavaScript library so they never appeared on-screen and were accessed using selenium.isAlertPresent(), selenium.isConfirmationPresent(), selenium.chooseOkOnNextConfirmation() and similar methods.

With Selenium 2 aka WebDriver, the dialogs do appear on screen and you access them with webDriver.switchTo().alert() which returns an Alert instance for further interactions.

However, when you use WebDriverBackedSelenium to mix Selenium 1 and WebDriver APIs – for example, during migrations from one to the other – the alerts don’t appear on screen and webDriver.switchTo().alert() always throws a NoAlertPresentException. This makes migration problematic because selenium.chooseOkOnNextConfirmation() doesn’t have any effect if the next confirmation is triggered by an action performed directly through WebDriver.

The solution is to avoid WebDriverBackedSelenium entirely and instead create a WebDriverCommandProcessor, configure it to not emulate dialogs and pass it to DefaultSelenium.

public Selenium createSelenium(
  final WebDriver webDriver,
  final String baseUrl)
{
  WebDriverCommandProcessor commandProcessor =
    new WebDriverCommandProcessor(baseUrl, () -> webDriver);
  commandProcessor.setEnableAlertOverrides(false);
  commandProcessor.start();
  return new DefaultSelenium(commandProcessor);
}

The key method call here is setEnableAlertOverrides(false) which disables the alert emulation. Note that the Selenium 1 methods for interacting with dialogs will no longer work – you have to switch all alert handling over to WebDriver.

Also note the subtle but important use of a closure to provide a Supplier<WebDriver> instead of just passing the webDriver directly. The overloaded constructor that takes a WebDriver immediately calls start() on the processor, and setEnableAlertOverrides must be called before start().

It’s unfortunate, but understandable, that Selenium 1 and WebDriver style alert interactions can’t be mixed as that would make migration a lot easier. At least with this approach you can convert your alert interactions over in one batch then convert all the other interactions incrementally.

December 08, 2015 03:18 AM

December 05, 2015

Ben Fowler: zsh

I wanted a more powerful shell for work, so I used fish for a while. However, I found it jarring: while it has a nice setup out of the box, like syntax highlighting, it's not actually bash-compatible, which is annoying when you go to do something nontrivial on the command line. Enter zsh. I had been giving it a wide berth because of its notoriety for complexity, but I think I've

December 05, 2015 10:11 PM

Ben Fowler: Writing a Spotify client in Emacs Lisp (and Helm) in 16 minutes

This is impressive work. There are quite a few tricks here, that I could repurpose to the kind of things I'm doing at work. Demonstrates the power of an editor which is so easily scriptable.

December 05, 2015 06:46 PM

Ben Fowler: Editor wars

In college, I used Emacs, at Martin's urging, before settling on Vim -- as a long-time Unix-head, I've variously owned many exotic machines, which all reliably came with Vi. 20 years of brain-damage has burnt the Vi keybindings into my fingers. That said, I have tried (and failed) to learn Vimscript, and have written a bit of Common Lisp and Scheme for a bit of fun (including a tiny Scheme clone

December 05, 2015 06:26 PM

December 04, 2015

Adrian Sutton: Testing@LMAX – Introducing ElementSpecification

Today LMAX Exchange has released ElementSpecification, a very small library we built to make working with selectors in selenium/WebDriver tests easier. It has three main aims:

Essentially, we use ElementSpecification anywhere that we would have written CSS or XPath selectors by hand. ElementSpecification will automatically select the most appropriate format to use – choosing between a simple ID selector, CSS or XPath.

Making selectors easier to understand doesn’t mean making locators shorter – CSS is already a very terse language. We actually want to use more characters to express our intent so that future developers can read the specification without having to decode CSS. For example, the CSS:

#data-table tr[data-id='78'] .name

becomes:

anElementWithId("data-table")
.thatContainsA("tr").withAttributeValue("data-id", "78")
.thatContainsAnElementWithClass("name")

Much longer, but if you were to read the CSS selector aloud to yourself, it would come out a lot like the ElementSpecification syntax. That allows you to stay focussed on what the test is doing instead of pausing to decode the CSS. It also reduces the likelihood of misreading a vital character and misunderstanding the selector.

With ElementSpecification essentially acting as an adapter layer between the test author and the actual CSS, it’s also able to avoid some common intermittency pitfalls. In fact, the reason ElementSpecification was first built was because really smart people kept attempting to locate an element with a classname using:

//*[contains(@class, 'valid')]

which looks ok, but incorrectly also matches an element with the class ‘invalid’. Requiring the class attribute to exactly match ‘valid’ is too brittle because it will fail if an additional class is added to the element. Instead, ElementSpecification would generate:

contains(concat(' ', @class, ' '), ' valid ')

which is decidedly unpleasant to have to write by hand.
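The generation itself is just careful string building. As a rough sketch of the idea (a hypothetical helper, not the actual ElementSpecification API):

public static String xpathWithClass(final String className)
{
  // Pad @class with spaces so that 'valid' matches class="valid" or
  // class="valid highlighted" but not class="invalid".
  return "//*[contains(concat(' ', @class, ' '), ' " + className + " ')]";
}

A library can produce that correctly every time, whereas a human writing the XPath by hand usually won’t.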

The biggest benefit we’ve seen from ElementSpecification though is the fact that it has a deliberately limited set of abilities. You can only descend down the DOM tree, never back up and never across to siblings. That makes selectors far easier to understand and avoids a lot of unintended coupling between the tests and incidental properties of the DOM. Sometimes it means augmenting the DOM to make it more semantic – things like adding a “data-id” attribute to rows as in the example above. It’s surprising how rarely we need to do that, and how useful those extra semantics wind up being for a whole variety of reasons anyway.

December 04, 2015 03:26 AM

November 30, 2015

Ben FowlerHipChat: fix typos of the last line you sent in a chat(!)

I discovered a neat trick accidentally today: if you're chatting with a user or group on HipChat and you make a typo, you can fix the last line you sent. E.g. you send "The quick bronw fox jumped over the lazy dog", then send "s/bronw/brown/" ... and the last (sent) line changes to "The quick brown fox jumped over the lazy dog"

November 30, 2015 08:26 PM

November 29, 2015

Ben FowlerMaths and physics

One of the frustrations of doing physics at Birkbeck and then the Open University is that they modularize the course in such a way that the maths is split from the physics, so they can present to a wider audience, purely for commercial and practical reasons. This forces the physics to be taught without the maths that is absolutely necessary to do it proper justice.  I'd argue that the physics

November 29, 2015 10:59 PM

Ben FowlerSimple disk profiling under Linux

I have a NAS based on an ancient HP MediaSmart that's had Windows Home Server wiped off it and replaced with Debian. It has a nice compact case, where the front is taken up with four hotswap cradles for 3.5" hard drives. Concerned that the disks were too slow, I asked around the HUMBUG mailing lists, and Paul Gearon replied, suggesting that I check out hdparm(1) and lsscsi(1) to see if

November 29, 2015 12:08 PM

November 28, 2015

Ben FowlerClean Code

Having been through a redundancy and now finding myself in a new job where development practices are quite mature, I've been thinking much harder about the way I practice software development. I had noted that the development culture of the office is heavily influenced by the writings of Joel Spolsky and Bob Martin, and I've noticed a few dog-eared copies of Clean Code floating around. So my

November 28, 2015 01:32 PM

November 26, 2015

Ben FowlerRESTful API design

My colleague at work passed this on to me as part of some work we've kicked off. This is the best set of guidelines I've seen for doing JSON RESTful web services so far. Vinay Sahni: Best Practices for a Pragmatic RESTful API. I have obviously been out of the loop, because the Link: header (RFC 5988) is news to me. As is that dirty rotten hack, JSONP.

November 26, 2015 02:16 PM

November 21, 2015

Ben FowlerUsing a Mac for serious development

One of the nicest things about the new job, is that we have been all given Macs for development. I have quite a comfortable setup: a maxed-out Mac Mini, with two 24" widescreen monitors. Despite being a Unix-head since I first had the chance to get my hands on Unix-powered machines, I've always found myself running Windows for development. Now actually finding myself having to code for a living

November 21, 2015 12:23 AM

November 19, 2015

Ben FowlerMy new gig

I probably didn't mention that I've had a change of scenery. I'm now working in a new role back at Thomson Reuters, building some neat new web applications. The three-month sabbatical was nice, but it was high time to tackle new challenges. Here's a video demo of the new hotness: World-Check One. It's thoroughly modern, massively scalable, and releases every few days. The difference between this

November 19, 2015 10:12 PM

November 14, 2015

Ben FowlerGitflow in 2015: no longer recommended?

ThoughtWorks have just released their 2015 technology radar, and it looks like consensus is building against gitflow -- the elaborate branching model invented as a set of best practices for using Git in big application development, where big releases are the norm. We firmly believe that long-lived version-control branches harm valuable engineering practices such as continuous integration, and

November 14, 2015 02:07 PM

November 13, 2015

Daniel DevineCentOS 7 Courier Repo & Work on Self-hosted Email

I've created a repository with Courier RPM packages built for CentOS 7 x86_64. The packages are compiled and signed automatically in a toolchain I have automated - but no particular research, tweaks or testing has been done to ensure that the packages are suitable for deployment. The packages were created because I want to work on Ajenti-V's Mail plugin and support my platform of choice for servers (CentOS), which doesn't currently have Courier packages available through repos (at least none of any repute).

Read more…

November 13, 2015 11:42 AM

November 11, 2015

Ben FowlerNew server build

I have a mind to upgrade my dinky old home server to something with a bit more grunt. At the moment, I have an HP MediaSmart headless media server, which was wiped and reinstalled with Debian, and is fully populated with big WD green drives. However, this little machine, while doing its job, has problems. Being so underpowered, it won't run ZFS well (at all?), and without

November 11, 2015 04:04 PM

Ben FowlerNewbie mistake: "nofail" and removable disks

I lost half an hour today because of a silly mistake. When adding removable disks to /etc/fstab under Linux, don't forget the nofail option. It's rather annoying when you reboot the system and then can't get into the machine remotely because a removable disk is missing. It's even more of a ballache if said system is completely headless and doesn't have a VGA or keyboard port by default (or

November 11, 2015 02:10 PM

November 09, 2015

Ben FowlerLinus Torvalds and that notorious code review

This happened recently: Christ people. This is just sh*t. The conflict I get is due to stupid new gcc header file crap. But what makes me upset is that the crap is for completely bogus reasons. This is the old code in net/ipv6/ip6_output.c: mtu -= hlen + sizeof(struct frag_hdr); and this is the new "improved" code that uses fancy stuff that wants magical built-in compiler support and has silly

November 09, 2015 03:41 PM

November 08, 2015

Ben FowlerBackups

My home computing setup has a Mac laptop and a Linux-based file server. Apart from a server-hosted Time Machine backup, which is brittle and occasionally fails, I have no backups. Seeing as I'd hate to lose all my wedding photos, I decided to sort this out today. On the hardware side, I bought a hard drive docking station and plugged it into the server. Then I bought myself a pair of

November 08, 2015 09:31 PM

November 04, 2015

Ben FowlerMy LaTeX workflow

Working on my first assignment for "S217: Physics: from classical to quantum". Since I've got time on my hands, I've typeset it in LaTeX using a nice template (the jhwhw problem-sheet style), with matplotlib to generate the graphs and Inkscape for the diagrams. TeXStudio is also very nice for the stuff it gives you out of the box (that'd otherwise take me hours to configure in Vim, my usual text

November 04, 2015 11:22 PM

Ben FowlerSwing

Swing. Just tried coding a toy example today -- a crude UI to calculate my GPA and degree classification for the degree I'm doing. I remember being shown it by Martin Pool as a technology preview back when it was the New Hotness, in 1998, if memory serves correctly. It got Old and Busted in record time. Too hard to use correctly, too easy to use incorrectly, verbose as hell. IntelliJ

November 04, 2015 11:14 PM

October 26, 2015

Ben MartinESP8266 and a few pins

The new Arduino 1.6.x IDE makes it fairly simple to use the ESP8266 modules. I have been meaning to play around with some open window detectors for a while now. I notice two dedicated GPIO pins on the ESP8266, which is one more than I really need. So I threw in an LED which turns on when the window is open. Nothing like local, direct feedback that the device has detected the state of affairs. The reed switch is attached on an interrupt, so as soon as the magnet gets too far away the light shines.


I will probably fold and make the interrupt set a flag so that the main loop can perform an HTTP GET to tell the server as soon as it knows that the state has changed.

Probably the main annoying thing I've still got is that during boot it seems the state of both GPIO pins matters. So if the reed switch is closed when you first supply power, the ESP goes into some stalled state.

It will be interesting to see how easy OTA firmware updates are for the device.

October 26, 2015 12:41 PM

October 15, 2015

Ben MartinTerry & the start of a video project.

I did a test video showing various parts of Terry the Robot while it was all switched off, talking about each bit as I moved around. Below are some videos of the robot with batteries a-humming and a little movement. First up is a fairly dark room and a display of what things look like just using the lighting from the robot itself: all the blinking Arduino LEDs, the panel, and the various EL and other lights.

Video: https://www.youtube.com/embed/IyN-tevSD7A

The next video has a room light on and demonstrates some of the control of the robot and screen feedback.

Video: https://www.youtube.com/embed/mA16HmQuZJI

I got some USB speakers too, but they turned out to be a tad too large to mount onto Terry. So I'll get some smaller ones and then Terry can talk to me letting me know what is on its, err, "mind". I guess as autonomy is ramped up it will be useful to know if Terry is planning to navigate around or has noticed that it has been marooned by a chair that a pesky human has moved.

The talk-over video is below. I missed talking about the TPLink wifi APs and why there are two, and why there might be only one in the future. The short answer is that Terry might become a two-part robot; with a base station, only one wifi AP is needed on the robot itself.


Video: https://www.youtube.com/embed/MGMIf4UHd4s

October 15, 2015 12:27 PM

October 10, 2015

Blue HackersWorld Mental Health Day 2015

On this year’s World Mental Health Day, some info from Lifeline and Mental Health Australia:

Mental Health Begins with Me

Did you know 70% of people with mental health issues don’t seek help? […] As a community we can encourage others to take care of their mental health by breaking down the barriers that stop people seeking help when they need it.

How can you help?

Make your mental health promise and share it today.  Encourage your friends and family to do the same and share their promises here or on social media using the hashtag #WMHD2015.

Here are some great tips and promises to make to yourself this 10/10 (October 10th):

  1. Sleep well
  2. Enjoy healthy food
  3. Plan and prioritize your day
  4. Tune into the music you love
  5. Cut down on bad food and booze
  6. Switch off your devices and tune out
  7. Hangout with people who make you feel good
  8. Join in, participate and connect
  9. Exercise your body and mind
  10. Seek advice and support when you need it

 

October 10, 2015 01:28 AM

October 08, 2015

Adrian SuttonUse More Magic Literals

In programming courses, one of the first things you’re taught is to avoid “magic literals” – numbers or strings that are hardcoded in the middle of an algorithm. The recommended solution is to extract them into a constant. Sometimes this is great advice, for example:

if (amount > 1000) {
  checkAdditionalAuthorization();
}

would be much more readable if we extracted an ADDITIONAL_AUTHORIZATION_THRESHOLD constant – primarily so the magic 1000 gets a name.

That’s not a hard and fast rule though.  For example:

value.replace(PATTERN_TO_REPLACE, token)

is dramatically less readable and maintainable than:

value.replace("%VALUE%", token)

Extracting a constant in this case just reduces the locality of the code, requiring someone reading it to jump around unnecessarily to understand it.

 

My rule of thumb is that you should extract a constant only when:

Arbitrary tokens like %VALUE% above are generally unlikely to change – it’s an implementation choice – so I’d lean towards preserving the locality and not extracting a constant, even when they’re used in multiple places. The 1000 threshold for additional authorisation, on the other hand, is clearly a business rule and therefore likely to change, so I’d go to great lengths to avoid duplicating it (and would consider making it a configuration option).
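For example, a minimal sketch of treating that threshold as configuration rather than a hardcoded value (the property name and default here are purely illustrative):

// Reads the threshold from a system property, falling back to 1000 if it isn't set.
private static final long ADDITIONAL_AUTHORIZATION_THRESHOLD =
    Long.getLong("payments.additionalAuthorizationThreshold", 1000);

if (amount > ADDITIONAL_AUTHORIZATION_THRESHOLD) {
  checkAdditionalAuthorization();
}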

Obviously these are just rules of thumb so there will be plenty of cases where, because of the specific context, they should be broken.

October 08, 2015 09:40 PM


Last updated: May 03, 2016 06:00 PM. Contact Humbug Admin with problems.