Catalyst IT Limited  
Planet Catalyst

19 October 2017

Catalyst Blog


Cutting our AWS spend in half

by Alex Lawn

It often seems that, by default, our monthly AWS spend goes in only one direction: up.

Our AWS cost reduction strategy has evolved over time, through the work we’ve done for our clients and their cloud platforms, and on our own AWS application stacks. There's a massive amount of devil in the detail. We're proud to share that thanks to this work, we've halved our monthly AWS bills over the last eight months. This comes without compromising the high availability stacks that we have engineered, or sacrificing any application performance.

It's been quite a journey, with lots of things being looked at in parallel. Here's how we did it.

S3 File systems

One of our common cloud-hosted applications is Moodle LMS (and its cousin, Totara). These applications usually need a networked, shared file system to store application file assets. We were using a triple-AZ redundant GlusterFS cluster, with each node having a dedicated EBS volume storing an entire copy of the application data. On larger sites this meant a significant storage footprint - 3TB x 3 EBS SSD volumes, coming in at $1080 USD per month, with additional costs for non-production environments and backups. We needed a better way.

Amazon S3

So, we developed an alternative file storage implementation for Moodle that moves the majority of the data files into S3 object storage. This code is available as a plugin.

Production site data now costs only $75 per month instead of over $1000. With some clever S3 bucket permissions, it's also possible to give non-production application instances access to the production bucket, so we don't have to duplicate data storage across prod and dev, test, UAT and staging. That means cost savings and considerable operational convenience.
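As a rough sanity check of those figures (the per-GB prices here are assumptions based on Sydney-region rates at the time - roughly $0.12 per GB-month for gp2 SSD and $0.025 per GB-month for S3 Standard), the maths works out like this:

```shell
# Back-of-envelope storage costs; per-GB prices are assumed Sydney rates.
gluster_monthly=$(awk 'BEGIN { printf "%d", 3 * 3000 * 0.12 }')  # 3 x 3TB gp2 replicas
s3_monthly=$(awk 'BEGIN { printf "%d", 3000 * 0.025 }')          # one 3TB copy in S3
echo "GlusterFS on EBS: \$${gluster_monthly}/month, S3: \$${s3_monthly}/month"
```

Triple replication on EBS multiplies the cost of every gigabyte by three; S3 handles redundancy internally at no extra per-copy charge.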

Bandwidth to S3

After initially migrating most of our file storage into S3, we were surprised to find our costs went up, not down. After some investigation, we saw 27TB of traffic from our EC2 instances to S3 was going via our VPC NAT Gateways at $0.059 per GB. This cost around $1500 USD per month! We solved it by enabling the VPC endpoint for S3, which makes the S3 pathway zero-rated and eliminated almost all of our NAT Gateway traffic. Result! And another example of the curly nature of AWS cost optimisation.
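The back-of-envelope maths for that traffic (using the per-GB NAT Gateway data-processing rate quoted above, and treating 27TB as 27,000GB) looks like this:

```shell
# 27TB of S3-bound traffic billed through the NAT Gateway at $0.059/GB
nat_monthly=$(awk 'BEGIN { printf "%.0f", 27000 * 0.059 }')
echo "NAT Gateway data processing: \$${nat_monthly}/month"
```

Which lines up with the roughly $1500 we saw on the bill.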

Storage and EBS Volumes

AWS has introduced several new block volume types over the last year, including cold storage and throughput-optimised volumes, both of which are much cheaper than SSD-backed storage. In February 2017 Elastic Volumes was introduced, which allows us to change an EBS volume type and increase volume size without any fuss.

This lets us provision volumes closer to the size of the contents without requiring pre-provisioning for future growth, as it's so easy to grow volumes. It also lets us convert volumes to cheaper and slower types when we don’t need the disk IO throughput performance.

There are still several pitfalls to be aware of when working with Elastic Volumes. After changing a volume size or type, another change cannot be made for six hours. Cold storage (sc1) and throughput-optimised (st1) volumes have a 500GB minimum size, and EBS volumes cannot be shrunk. Also, the slower sc1 and st1 volumes are prone to exhausting their IO burst credits. If this happens the disk IO will slow dramatically, and you will need to move to a faster disk type. This can be slow to diagnose if you haven't seen it before and aren't looking for it.
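To see why minimum-size throughput-optimised volumes run out of steam, remember that st1 baseline throughput scales with volume size - roughly 40 MB/s per provisioned TB, per AWS's published st1 specifications (that figure is from the AWS documentation, not from our own benchmarks):

```shell
# Baseline throughput of a minimum-size (500GB) st1 volume once burst
# credits are exhausted, at ~40 MB/s per provisioned TB.
st1_baseline=$(awk 'BEGIN { printf "%d", 0.5 * 40 }')
echo "500GB st1 baseline: ${st1_baseline} MB/s"
```

Once burst credits are gone, that 20 MB/s is all you get, which is where the dramatic slowdowns come from.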

Carefully going through all our EBS volumes to optimise the size and type of each volume has resulted in savings of about $250 per month.

EBS Snapshots

AWS volume snapshots are a powerful tool for backing up production volumes easily. However, it can be difficult to calculate all the costs at a completely granular level, and it's very easy to take too many snapshots. In the Sydney region we are charged $0.055 per GB/month for unique blocks of data. More frequent snapshots of an EBS volume won't necessarily blow out our bill, as AWS only charges for the block-level changes between each snapshot. In reality, though, we have seen a lot of snapshot strategies where far too much data is being stored, even taking into consideration that we pay only for block deltas.
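As an illustration of that delta-based billing (the volume size and daily churn here are hypothetical), a 500GB volume with about 5GB of changed blocks per day, snapshotted daily, stores one full copy plus 29 daily deltas over a month:

```shell
# Hypothetical example: 500GB volume, ~5GB/day of changed blocks, daily snapshots
snap_gb=$(awk 'BEGIN { printf "%d", 500 + 29 * 5 }')
echo "~${snap_gb}GB billed, not 30 x 500GB = 15000GB"
```

At $0.055/GB that is about $35 per month, rather than the $825 that thirty full copies would cost - which is why high snapshot frequency alone isn't the problem, but high churn and long retention are.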

We found a few old EBS snapshots lying around that were easy to eliminate, saving about $800 per month. We have seen with our clients similar scenarios where 'backup' snapshots get forgotten. Further savings will come from reducing the volume of data in snapshots and the rate of change of our data snapshots.


EBS Snapshots to S3

Even with savings from eliminating older EBS snapshots, we were still spending over $2000 per month on EBS snapshots kept as historical backups. Most of the data in these snapshots was unique, so we gained little from shared blocks. S3 pricing is less than half the price of an EBS snapshot per gigabyte, so by using s3-parallel-put we were able to upload the contents of these snapshots into S3. Once the EBS snapshots were deleted, this saved us around $1000 per month, even after the increased S3 storage costs are taken into account. We expect this will improve over time as bucket policies slowly move objects into Infrequent Access storage and eventually Glacier.
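The per-gigabyte comparison driving this change (using the Sydney EBS snapshot rate quoted earlier, and an assumed S3 Standard rate of roughly $0.025/GB):

```shell
# S3 Standard per-GB price as a fraction of the EBS snapshot price
ratio=$(awk 'BEGIN { printf "%.2f", 0.025 / 0.055 }')
echo "S3 Standard costs ${ratio}x the EBS snapshot rate per GB"
```

And that is before lifecycle policies move anything to Infrequent Access or Glacier.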

This is another example of why object storage is often a much more cost-effective way to archive data, but it’s not always a simple like-for-like when moving from block storage snapshot models into object storage.

Cross VPC Bandwidth

Bandwidth from one AWS VPC to another over VPC peering incurs a cost. In one of our clusters, a set of webserver nodes was using an NFS mount from another VPC, which resulted in a lot of cross-VPC traffic. Consolidating everything into a single VPC has eliminated this traffic entirely. All the more reason to engage experienced cloud network engineers when architecting cloud stacks.

Cross Availability Zone traffic

As part of the high availability architecture requirements of our application stacks, we build them across AWS Availability Zones (AZs). Traffic from one EC2 instance to another in the same availability zone is free; however, when that traffic crosses to another AZ, a cost is incurred. When things like application load balancing or constant replication cross AZs, high traffic volumes can be generated and the AWS spend can rise considerably.

Our primary network fileshare is a replicated GlusterFS volume with a node in each AZ. This is in line with the 'Architect for Failure' policy espoused by AWS Solutions Architects. GlusterFS has an obscure mount option called read-subvolume-index, which hints to GlusterFS that it should use the local Availability Zone's node for read operations if it is available. This single mount option has saved us around $1500 per month in network traffic that no longer needs to leave an Availability Zone.

The full mount option in /etc/fstab is

datanode-1:/sitedata /var/lib/sitedata glusterfs defaults,_netdev,fetch-attempts=6,backupvolfile-server=datanode-2,xlator-option=*.read-subvolume-index=2 0 0

Here the read-subvolume-index needs to be different in each availability zone to match the correct GlusterFS server.
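For example, with one data node per AZ the index differs per zone. The node names and the AZ-to-index mapping below are assumptions for illustration - check the output of `gluster volume info` for the actual subvolume ordering on your cluster:

```
# Web nodes in the same AZ as datanode-1 (subvolume index 0):
datanode-1:/sitedata /var/lib/sitedata glusterfs defaults,_netdev,fetch-attempts=6,backupvolfile-server=datanode-2,xlator-option=*.read-subvolume-index=0 0 0

# Web nodes in the same AZ as datanode-2 (subvolume index 1):
datanode-2:/sitedata /var/lib/sitedata glusterfs defaults,_netdev,fetch-attempts=6,backupvolfile-server=datanode-3,xlator-option=*.read-subvolume-index=1 0 0
```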

RDS Snapshots

Over the last six months RDS Snapshot costs have been slowly creeping up as the number and size of our RDS instances grow. RDS Snapshots are important as they are part of the database PITR (Point In Time Recovery) that AWS offers.

To reduce costs here we removed legacy snapshots from long-dead databases, and set non-production RDS instances to retain snapshots for only seven days instead of 31, or disabled them entirely.

The cost savings here were minimal, but every bit counts.


Turning things off

AWS and other cloud infrastructure solutions give us the flexibility to launch infrastructure on demand; their success is a testimony to this. And it's a better world than in the past, when we had a long procurement cycle before we could deploy anything into a data centre.

During our month-long crusade to get AWS costs under control we identified the following services that could be turned off. We'd challenge any organisation to look this closely at their own infrastructure and get a bit brutal.

  • 1x t2.medium EC2 instance in a region we don’t generally use, a relic from ages past load testing

  • 2x m4.large RDS instances not used in months that we spun up for testing purposes

  • 2x ElastiCache nodes that could be consolidated into an existing one

  • 1x Elasticsearch endpoint that never indexed a single document

  • Several hundred gigs of unused EBS volumes - somewhere, an autoscale group didn't have its EBS volumes set to delete on termination

  • 4x unused ELBs that were getting no traffic and no longer had DNS pointed at them

  • 1 VPN Endpoint that was no longer connected

  • 1x test autoscale group with 2x t2.micro machines that was no longer needed

Individually, none of these items were particularly big or expensive, however when put together they all add up.

Direct Connect

Given a large portion of the traffic from our AWS account goes to our office, a Direct Connect connection could potentially save us several hundred dollars a month. Direct Connect traffic is a third the price of regular outgoing traffic, with the overhead of having to find an ISP with a Direct Connect offering and paying for the Direct Connect port itself. We did the maths, and for us the potential savings aren't big enough to justify the time spent, but it's worth looking into.


Billing analysis tools

Ice is a tool that was open-sourced by Netflix. It provides easy visualisation and examination of your detailed AWS billing, and lets you drill into spending patterns on a daily basis. It works from the AWS detailed billing reports, which get uploaded to an S3 bucket. There are also a number of as-a-service offerings such as CloudCheckr and AWS Trusted Advisor. But remember, these tools only help you decide what to do - they don't do it for you!




Using Reserved instances

The final avenue for cost savings, once all of our infrastructure is of a suitable capacity, has been to invest in Reserved Instances. This means committing to a certain level of AWS usage, and receiving a discount for doing so. EC2 usage is only 25% of our bill, with the rest coming from storage, traffic and other AWS services. Still, a 30% reduction in price is welcome.
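Since EC2 is only a quarter of the bill, the overall effect of a Reserved Instance discount is smaller than the headline rate suggests. A quick check, assuming a 30% RI discount applies across all of our EC2 usage:

```shell
# 30% discount on the 25% of the bill that is EC2 usage
overall_pct=$(awk 'BEGIN { printf "%.1f", 0.30 * 25 }')
echo "Overall bill reduction: ~${overall_pct}%"
```

Worth having, but it's why the storage and traffic work above mattered more for us.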

One thing we have noticed when working with our clients to bring down their AWS bills via the purchase of reserved instances, is that they can have difficulty understanding the model, and they can be hesitant to commit to the large spend for a sizeable Reserved Instance purchase. Believe it or not, Sysadmins and DevOps engineers are not accountants. And there is always an element of risk when committing to a particular AWS service size.

We've discovered that reducing AWS spend is often not any one person's job. We are all acting as the owners of the business and any reduction in our AWS spend is good for our profits.

Google Cloud has just launched a region in Australia, and we think the Google model for savings is far better: run a Google Compute Engine instance for an entire month and you receive a 30% sustained use discount. No forward planning required, and no risk of a long-term reservation going unused should requirements change.

We've found that reserving instances for EC2, RDS and ElastiCache initially resulted in a higher one-off charge; however, over the coming months this should save us several thousand dollars.


The most important thing in managing your AWS spend is paying attention. Get the knowledgeable people in a room to review the total spend. This will be hard the first few times, but you will almost certainly identify some potential savings.

There is no silver bullet to solve all your AWS cost problems. It’s most likely going to be a combination of vigilance and some pragmatic AWS usage policy. Also, don’t discount the value in engaging an AWS Partner to review your infrastructure usage.

There are no doubt more strategies than are mentioned in this blog, and we'll be looking at other things we can do to further reduce our bill over time.

Hope this is of use to you out there.

Alex Lawn is one of the founding members of Team Cloud, Catalyst's dedicated cloud consultancy initiative.

16 October 2017

Catalyst Blog


Security vulnerabilities: what you need to know

by Gavin Porter


Overnight, security weaknesses have been published in the WPA2 protocol used by most home and commercial Wi-Fi systems, including public Wi-Fi. The attack that exploits this weakness is called KRACK (Key Reinstallation AttaCK).

An attacker, within range of a victim, can use this technique to control the session key and read any information transmitted over the network that isn't protected by further encryption, such as HTTPS or a VPN.

Because the weaknesses are in the underlying protocol it works against all modern protected Wi-Fi networks and likely affects any device that can connect to Wi-Fi. It's a good idea to patch your devices immediately - install all available updates. Updates have been published for most operating systems and for some Wi-Fi hardware.

Android updates are expected to be published by Google in the November release. If you can't update your device, perhaps because you have an Android phone that is not actively supported by the manufacturer, you could use a VPN service to protect yourself.

Public/Private keys

A serious flaw has been identified in a code library used to generate public/private RSA key pairs. These are used in smartcards, security tokens, laptops, and other devices using cryptography chips made by Infineon Technologies. The flaw allows an attacker to determine the private key from the associated public key.

Such hardware typically uses proprietary software that is not easy to review or check. Catalyst's preference is to use open source software to generate cryptographic keys.

It's easy to test whether a key is vulnerable to this type of attack - use the online checking service, or download the Python software for offline tests.

What to do next

We're already working to make sure all our clients are protected from these attacks.  If you're a Catalyst client and you have concerns, please contact your account manager.

For further information on KRACK, visit

For further information on the crypto key weakness, visit


12 October 2017

Catalyst Blog


Koha plus FOLIO: a solution for connected library services

by Chris Cormack and Brendan A. Gallagher


Catalyst and ByWater Solutions are both vendors of the Koha library management system, and have worked together in the Koha community for eight years. Both companies support a number of academic libraries (universities, polytechnics, TAFEs, colleges) and are investigating how FOLIO can benefit our customers, with an initial focus on academic libraries.

This paper will explain how Koha and FOLIO can work together to benefit libraries. It will also set out specifications for some ways Koha could connect to FOLIO, and therefore enhance library services.


Koha Integrated Library Management System is the most popular open source software solution for library management. Originally developed in New Zealand in 1999, it is now used in over 15,000 libraries around the world. Over 300 people have now contributed as developers to the project which continues to grow in popularity. Openhub describes the project as having a well established, mature codebase maintained by a very large development team. The average Koha release includes contributions to the code from around 80 individuals.

The Koha project supports a growing number of API connectors and standards in order to interoperate with other library services. The Koha project already offers support for several systems libraries need to interoperate with, but there is always more that can be done. Library system users don’t expect to have to log into – or even look at – multiple interfaces or products any more. Libraries need to deliver on this expectation for their users, so Koha needs to deliver this functionality for libraries.

The potential gains of such fully integrated systems are significant. Library users and staff switching between software packages, electronic databases, and library management systems would be a thing of the past, with users enjoying the benefits of truly joined-up services and a single sign-on providing all the services they need.

However, linking up the many and varied library systems, in the wide variety of library settings in which Koha is used, is a goal not without its challenges.

At its most basic level, building connectors for each piece of software that libraries would like integrated with Koha requires a lot of work for developers to write, and for the Koha project to test, release and maintain. With different focuses and different systems to integrate with, the efforts of individual institutions can easily be scattered, leaving libraries working in silos without enough funding or people to support the end goal. Using open source software like Koha brings the freedom to achieve any goal where contributors have the skills and desire to do so, but these efforts are more effective when everyone works together.

Further complicating matters, Koha is open source software but many of the other systems used for learning management - ePortfolios, financial systems and the like - are not. This means that to connect them, buy-in and a willingness to collaborate are needed from all the different providers of proprietary software. On top of a willingness to help, providers also need to contribute time and money to a project unlikely to positively influence their bottom line.

So when we heard there might be a way for libraries and vendors to work together with a focus on integrating their digital systems, we wanted to know more. Enter FOLIO.

FOLIO is another open source project, started by content provider EBSCO Information Services, and with a growing community. FOLIO reduces the number of connectors required for creating joined up, integrated systems. Instead of the different systems requiring a multitude of separate connections to each other, each system simply connects to FOLIO. It still requires time and effort to build, but we can see some significant benefits to working with FOLIO in this way, particularly for academic libraries who need a way to interoperate with the wider context of support and services provided for students.

With FOLIO being Apache v2 licensed open source and protected by the Open Library Foundation, we know we can make changes to it if we need to, which becomes important if a requirement is urgent, or another proprietary system has changed and we have to work around it. We also know that with FOLIO, libraries are safe from the problem of software or a company becoming unsupported or sold off and inaccessible, and we are free to swap in and out the various systems that connect with FOLIO if they change.


FOLIO can be thought of as a layered platform with four distinct layers. At the bottom, the foundation is the System layer, where data is stored, indexed and logged. Built on top of this is the Message layer, which the FOLIO project calls OKAPI. This can be thought of as the API gateway, or “bus”, with well-defined structures for tenant context, permissions, authentication, and system APIs. This part of the project is the bit that really interests Koha developers: the OKAPI layer is what we would interact with to interoperate with other connected applications, and libraries can instruct the developer of any application to integrate with, and maintain a connection to, the OKAPI layer. The next layer is the Application layer, which could range from micro-service level (e.g. a holds app) up to full application level (e.g. Koha). Finally there is the User Interface toolkit, which would be used more by developers building new applications than by those connecting existing ones.

Diagram showing the above-mentioned four layers of the FOLIO platform

Librarians in Special Interest Groups (SIG) are currently detailing the features and integrations they would like to see in FOLIO. Design and development for several core workflow related apps is underway. Additionally, fund grants have been made available (from organizations such as EBSCO) to encourage development of innovative apps on the FOLIO platform. Leveraging these funds can attract Koha developers to connect Koha, FOLIO, and other library environments.


Following are some specifications for three ways that Koha could integrate with FOLIO, either by influencing the direction of FOLIO now, or contracting other developers to work on FOLIO after the production release next year.

Idea 1: Koha + Moodle via FOLIO

Most academic institutions will have both a library system and a learning management system. It would be great if these systems could talk to each other.

Imagine if you, a student, could place holds on items on your course reading list without leaving the learning management system. If you wrote a review of a piece of literature in the learning management system, it could be added as a review on the bibliographic record in the library system. Koha and Moodle are arguably the two leading open source solutions for library management and learning management respectively, so linking them together makes sense.

Taking just the holds example, the potential development would be a Koha app (or apps) at the FOLIO application level. This would act as a translator, working with the Koha RESTful API on one side and the OKAPI API on the other. To support holds, it would also need to talk to the holds API(s) in Koha.

Those APIs would potentially be:


To look up the user details.


To look up the usage details for the item to be placed on hold.

The workflow for placing a hold is: get the information about the user, get the information about the object, check that a hold can be placed, then place the hold.

GET /users/search/{email} would return a user matching the email

GET /users/{user_id} would return a user matching the id

POST /circulation/holds/place/{user_id}/{biblio_id} would return a hold id on success, or an error message on failure
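The endpoints above could be sketched as a thin shell client. The base URL is hypothetical, and a real translator app would parse the JSON responses rather than passing them through raw:

```shell
# Hypothetical Koha REST client for the hold workflow; KOHA_API is an
# assumed base URL, and responses are returned unparsed.
KOHA_API="https://koha.example.org/api/v1"

find_user() {    # usage: find_user <email>
  curl -s "$KOHA_API/users/search/$1"
}

get_user() {     # usage: get_user <user_id>
  curl -s "$KOHA_API/users/$1"
}

place_hold() {   # usage: place_hold <user_id> <biblio_id>
  curl -s -X POST "$KOHA_API/circulation/holds/place/$1/$2"
}
```

The FOLIO-side app would make the equivalent calls against the OKAPI layer.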

The Koha holds app would accept messages from the OKAPI layer, parse them, call the appropriate Koha API and format the response to be handed back to the OKAPI layer.

Koha -> | FOLIO Koha Hold App -> Okapi -> Moodle App | -> Moodle

And back the other way. A Moodle app would also need to be created to execute this on the Moodle side.

The above example potentially provides a framework for thinking about how the FOLIO OKAPI layer can connect to a variety of other systems or resources.

Diagram showing integration of FOLIO, Koha and Moodle

Idea 2: Course Reserves App

Another potential development is a course reserves app, which could also use the FOLIO user interface layer.

This would form the basis of a collection of materials in Koha (class, material, and course information) harvested and displayed in an app UI for FOLIO - while also adding in electronic materials connected to FOLIO (discovery materials, ERM - anything stored in the FOLIO metadata collection).

The required API would potentially be:

GET /courses (Get ALL courses)

GET /courses/{course_id} (Get one course)

GET /courses/{course_id}/items (Given a course_id, get ALL the items)

GET /courses/{course_id}/items/{item_id} (Given a course_id, get the item identified by item_id)

PUT /courses/{course_id} (Update information for the identified course - useful to 'disable' a course)

PUT /courses/{course_id}/course_item/{item_id} (Update an item in an identified course)

POST /courses/ (Create a new course with all information)

POST /courses/{course_id}/course_reserve/ (Add a new reserve to a course)

DELETE /courses/{course_id}

DELETE /courses/{course_id}/item/{item_id}

Diagram showing integration of FOLIO, Koha and ERM

Idea 3: Institution-wide reports

Koha allows full read access to the database for staff users, so it has a very powerful reporting engine, and any report that can be written can also be output as JSON, visualised, and marked public or private. We would not want to replace this reporting engine, but we might want to combine reports with reports from other applications used in the institution. It makes a lot of sense to use FOLIO to do this when multiple applications are connected.

The API we would need on the Koha side would be as follows:

GET /reports (Get all the reports)

GET /reports/{reports_id}

PUT /reports/{reports_id}

POST /reports

DELETE /reports/{reports_id}

Diagram showing integration of FOLIO, Koha and reports visualisation tool


Koha has never worked in the way proprietary and even some open source projects work - we don’t try to build features and sell them. Every feature is built because a library has asked for it. So, timeframes for producing new features are a lot more flexible, and rely on libraries asking for what they need. FOLIO integration will be done when a library (or libraries) asks for it to be done. We would expect that to happen in the next year or two.


We hope that this paper provides libraries with some ideas for how Koha could work, with FOLIO as the connecting point for a range of integrated systems for libraries and the wider institution or community they serve. Perhaps some libraries will be inspired to look into FOLIO, and ask for certain connections or functions to be built to meet current or future requirements.

Using FOLIO with Koha and other applications still requires effort. We all know that the modern library user is much more than someone who borrows books, and the expected user experience of a library customer is changing fast. Using FOLIO allows libraries to focus on one place for software to interoperate, so they can offer users more scope to interact with library collections, reaching into systems provided by the wider institution.

If we have a strong base of libraries using FOLIO for integration, we can put more time into the user experience, with less reinvention of the wheel, less custom work to do, and clearer pathways for how we can work together. That would be great news for everyone.

Published October 2017.

This paper can be downloaded as a PDF.


About Catalyst IT

We’re the open source experts. The company was founded in 1997 on the belief that free and open source software provides the best solutions for clients. The five founding directors are still as committed to the business (and open source) as ever, and the company is now busy helping clients all over the world to solve business problems the open source way. Koha is a key part of Catalyst’s business and we’re excited to see more and more libraries reap the benefits of using the system.

About Chris Cormack

Chris Cormack (Kāi Tahu, Kāti Māmoe, Waitaha) has a BSc in Computer Science and a BA in Mathematics and Māori Studies from Massey University.

Chris was the lead developer on the original version of Koha, produced by Katipo Communications for the Horowhenua Library Trust in 2000. He has remained active in the Koha community and is a key figure in the Koha team at Catalyst.

About ByWater Solutions

ByWater Solutions is a company of librarians supporting a product that was created for librarians – Koha Library Management System. The company was founded in 2009 by Brendan A. Gallagher and Nathan A. Curulla, and has grown to support over 1000 libraries worldwide since then.

About Brendan A. Gallagher

Brendan A. Gallagher, MLS Librarian, is an expert on installation, data migration, and customisation of many open source platforms. Brendan was a member of the first class of ALA Emerging Leaders where he focused on ways those in the library profession could re-brand themselves in the digital world. He was honoured as Alumni of the Year for the Southern Connecticut State University in 2011.

08 October 2017

Catalyst News

SmartStart wins at the Best Awards

We had a great night with our friends from DIA at the Best Awards last Friday.

Nicole and team at the best awards

28 September 2017

Catalyst News

Catalyst Training in October

Whether you are an existing user of open source technologies or are just getting started, Catalyst's experienced instructors and schedule of courses can help you make the most of the technology.

26 September 2017

Catalyst News

No degree? No problem.

We’re pleased to be part of the movement. You might have heard about it in the news - an open letter signed by NZ companies pledging to change the conversation around qualifications and the hiring process.

17 September 2017

Andrew Ruthven

Missing opkg status file on LEDE...

I tried to install a package on my home router, which is running LEDE, only to be told that libc wasn't installed. Huh? What's going on?! It looked to all intents and purposes as though libc wasn't installed. And it looked like nothing was installed.

What to do if opkg list-installed is returning nothing?

I finally tracked down the status file it uses as being /usr/lib/opkg/status. And it was empty. Oh dear.

Fortunately the info directory had content. This means we can rebuild the status file. How? This is what I did:

cd /usr/lib/opkg/info
for x in *.list; do
pkg=$(basename $x .list)
echo $pkg
opkg info $pkg | sed 's/Status: .*$/Status: install ok installed/' >> ../status
done

And then for the special or virtual packages (such as libc and the kernel):

for x in *.control; do
pkg=$(basename $x .control)
if ! grep -q "Package: $pkg" ../status; then
echo $pkg is missing; cat $x >> ../status
fi
done

I then had to edit the file to tidy up some newlines for the kernel and libc, and set the status lines correctly. I used "install hold installed".

Now that I've shaved that yak, I can install tcpdump to try and work out why a VoIP phone isn't working. Joy.

02 September 2017

Andrew Ruthven

Network boot a Raspberry Pi 3

I found that to make all this work I had to piece together a bunch of information from different locations. This fills in some of the blanks from the official Raspberry Pi documentation. See here, here, and here.


Download the latest Raspbian image and unzip it. I used the lite version as I'll install only what I need later.

To extract the files from the image we need to jump through some hoops. Inside the image are two partitions, we need data from each one.

 # Make it easier to re-use these instructions by using a variable
 IMG=2017-04-10-raspbian-jessie-lite.img
 fdisk -l $IMG

You should see some output like:

 Disk 2017-04-10-raspbian-jessie-lite.img: 1.2 GiB, 1297862656 bytes, 2534888 sectors
 Units: sectors of 1 * 512 = 512 bytes
 Sector size (logical/physical): 512 bytes / 512 bytes
 I/O size (minimum/optimal): 512 bytes / 512 bytes
 Disklabel type: dos
 Disk identifier: 0x84fa8189
 Device                               Boot Start     End Sectors  Size Id Type
 2017-04-10-raspbian-jessie-lite.img1       8192   92159   83968   41M  c W95 FAT32 (LBA)
 2017-04-10-raspbian-jessie-lite.img2      92160 2534887 2442728  1.2G 83 Linux

You need to be able to mount both the boot and the root partitions. Do this by taking the start offset of each partition and multiplying it by the sector size, which is given on the "Sector size" line (typically 512 bytes). For example, with the 2017-04-10 image, boot has an offset of 8192 sectors, so I mount it like this (it is VFAT):

 mount -v -o offset=$((8192 * 512)) -t vfat $IMG /mnt
 # I then copy the data off:
 mkdir -p /data/diskless/raspbian-lite-base-boot/
 rsync -xa /mnt/ /data/diskless/raspbian-lite-base-boot/
 # unmount the partition now:
 umount /mnt

Then we do the same for the root partition:

 mount -v -o offset=$((92160 * 512)) -t ext4 $IMG /mnt
 # copy the data off:
 mkdir -p /data/diskless/raspbian-lite-base-root/
 rsync -xa /mnt/ /data/diskless/raspbian-lite-base-root/
 # umount the partition now:
 umount /mnt
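The arithmetic used above (start sector multiplied by sector size) can be double-checked with a couple of lines, using the start sectors from the fdisk output:

```python
SECTOR_SIZE = 512  # from the "Sector size" line of fdisk's output

boot_start = 8192    # start sector of the FAT32 boot partition
root_start = 92160   # start sector of the ext4 root partition

print(boot_start * SECTOR_SIZE)  # 4194304 - byte offset for the boot mount
print(root_start * SECTOR_SIZE)  # 47185920 - byte offset for the root mount
```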


When I first set this up, I used OpenWRT on my router, and I had to patch /etc/init/dnsmasq to support setting DHCP option 43. As of the writing of this article, a similar patch has been merged, but isn't in a release yet, and, well, there may never be another release of OpenWRT. I'm now running LEDE, and the good news is that it already has the patch merged (hurrah!). If you're still on OpenWRT, then here's the patch you'll need (commit 9412fc294995ae2543fabf84d2ce39a80bfb3bd6):

This lets you put the following in /etc/config/dnsmasq. It says that any device that uses DHCP and has a MAC address issued by the Raspberry Pi Foundation should have option 66 (boot server) and option 43 set as specified. Set the IP address in option 66 to the device that should be used for TFTP on your network; if it's the same device that provides DHCP, it isn't required. I had to set the boot server, as my other network boot devices use a different server (with an older tftpd-hpa; I explain the problem further down).

 config mac 'raspberrypi'
         option mac 'b8:27:eb:*:*:*'
         option networkid 'raspberrypi'
         list dhcp_option '66,'
         list dhcp_option '43,Raspberry Pi Boot'


Initially I used a version of tftpd that was too old and didn't support how the RPi tries to discover whether it should use the serial number based naming scheme. The version of tftpd-hpa in Debian Jessie works just fine. To find out the serial number you'll probably need to increase the logging of tftpd-hpa; do so by editing /etc/default/tftpd-hpa and adding "-v" to the TFTP_OPTIONS option. It can also be useful to watch tcpdump to see the requests and responses, for example (the host argument is the IP of the RPi I'm working with):

  tcpdump -n -i eth0 host and dst port 69

This told me the serial number of my RPi, so I made a directory in my tftpboot directory named after that serial number and copied all the boot files into it. I then found that I had to remove the init= portion from the cmdline.txt file I'm using. To ease debugging I also removed quiet. So, my current cmdline.txt contains (newlines entered for clarity, but the file has it all on one line):

dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=/dev/nfs
ip=dhcp elevator=deadline rootwait

NFS root

You'll need to export the directories you created via NFS, so add them to your server's exports file.
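For illustration, an /etc/exports covering the two directories could look like this; adjust the client network range and export options to suit your own network:

```
/data/diskless/raspbian-lite-base-root 192.168.1.0/24(rw,no_root_squash,no_subtree_check)
/data/diskless/raspbian-lite-base-boot 192.168.1.0/24(rw,no_root_squash,no_subtree_check)
```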


And you'll also want to make sure you're mounting those correctly during boot, so I have the following lines in /data/diskless/raspbian-lite-base-root/etc/fstab:

    /       nfs   rw,vers=3       0   0
    /boot   nfs   vers=3,nolock   0   2

Network Booting

Now you can hopefully boot. Unless you run into this bug, as I did, where the RPi will sometimes fail to boot. It turns out the fix, which is mentioned on the bug report, is to put bootcode.bin (and only bootcode.bin) onto an SD card. That will load the fixed bootcode, which will then boot reliably.

11 April 2017

Jonathan Harker

Australian Syrah then and now: current line-up

Tonight was the second part of a two-part tasting of Australian Shiraz with Geoff Kelly at Regional Wines & Spirits, the first being the 1996 library tasting (see previous post). This time we blind-tasted eleven new 2013-14 Australian Shiraz wines, including the Penfolds Grange, which is north of $850 per bottle, with an Elephant Hill Hawke’s Bay 2014 Syrah thrown in to keep us honest.

Each wine was well-built, young and purple, peppery and bold, and each had something to say, but unfortunately this time I exhausted my palate by the ninth wine and couldn’t make head or tail of the last three. A shame, because although I liked them, the Lloyd Reserve which I admired in the library tasting was hiding among them.

As we poured the blind wines into glasses, the colours of all the wines were good healthy young Syrah deep purple-red, although I could tell there would be something special about No. 6 and No. 9 just from the density of colour; No. 6 looked like you could stand a spoon up in it.

For me the remarkable wines were Nos. 3, 6, and 9.

No. 3 reminded me of a big, older-style blackcurrant jam Australian Shiraz, with lots of berry, ripe toffee and a long oaky finish. The minty, freshly-crushed basil leaf on the nose typical of South Australian Shiraz goes well; Geoff says if he likes it he calls it “mint”, or “eucalypt” otherwise. Someone else remarked this wine might be like Kylie crashing a Holden ute full of Foster’s into a blackberry patch. Enjoyable perhaps, but not especially subtle. No. 6 was the most beautifully dark rich purple-red, with an intoxicating, highly concentrated nose of mostly blackcurrant, but also warm florals and a whiff of rough-sawn timber. The wine itself was complex, initially spicy but with savoury meaty flavours and berries competing for space, with a longer finish. No. 9 for me was also a dense colour, with a peppery lavender on the nose and an interesting hint of baked dates or figs, not over-sweet but nicely integrated into the plum fruit flavours for a lingering complexity.

Once again we gathered some “wisdom of the crowd” data to see if as a group we could pick our wines, and this time we did a bit better; results are below.

Blind rating totals from the new 2013-14 Australian Syrah tasting.

The Penfolds Grange hiding at No. 6 was correctly identified by about half the group. I was overthinking things, trying to re-taste the last three wines at this point to find the rich, complex wine that would be a likely Grange candidate. Having never tasted it before, I had assumed that something as ludicrously expensive as the Grange might surely be less up in one’s grill with its big bold Aussie blackcurrants, so although No. 6 was beautifully dense and concentrated, I had assumed the Grange was busy being all sophisticated elsewhere. Once everyone’s hands shot up, however, it became clear the cat was out of the bag! The No. 9 I liked was the Elephant Hill 2014 Syrah Reserve, which surprised me, and the Lloyd Reserve from Coriole in McLaren Vale was hiding at No. 10, which was interesting to re-taste after the Grange. It has that torn basil leaf mint and lavender on the nose, with savoury and plum, liquorice and a good long finish.

Of further note was No. 11, the Cape Mentelle 2013 Shiraz from Margaret River in Western Australia. This was a more delicate wine than the others, with an interesting and complex bouquet of jasmine, perhaps roses, a good plum fruit body and a nice mild spiciness like a hint of Christmas cake, with a good long-ish finish. It was certainly different enough from the others that three of us thought it was the Hawke’s Bay Syrah.

Herewith the full list of wines:

1. 2015 Wirra Wirra Shiraz Catapult, McLaren Vale, South Australia
2. 2013 Domaine Chandon Shiraz, Yarra Valley, Victoria
3. 2014 Burge Shiraz Filsell, Barossa Valley, SA
4. 2014 Two Hands Shiraz Gnarly Dudes, Barossa Valley, SA
5. 2014 John Duval Shiraz Entity, Barossa & Eden Valley, SA
6. 2012 Penfolds Shiraz Grange, Barossa Valley, SA
7. 2012 Wirra Wirra Shiraz RSW, McLaren Vale, SA
8. 2012 Elderton Shiraz Command, Barossa Valley, SA
9. 2014 Elephant Hill Syrah Reserve, Hawkes Bay, New Zealand
10. 2013 Coriole Shiraz Lloyd Reserve, McLaren Vale, SA
11. 2013 Cape Mentelle Shiraz, Margaret River, Western Australia
12. 2013 Seppelt Shiraz St Peters, Grampians, Victoria

30 March 2017

Jonathan Harker

Australian Syrah then and now: 1996 library tasting

Tonight we went to one of Geoff Kelly’s illuminating wine tastings, held as ever at Regional Wines & Spirits next to the Basin Reserve in Wellington. This was part one of a two-part tasting: a library tasting of 20 year-old Australian Shiraz wines, with a 1996 Hermitage thrown in as a yardstick; next month, part two will be a tasting of eleven new vintage Australian Shiraz with a good Hawke’s Bay Syrah to compare. Tonight was a blind tasting, in order to gather some interesting data from participants before revealing which wines were which.

It really is quite intimidating to try twelve magnificent 20 year-old red wines, and try to remain objective about comparing their colour and weight, nose (aroma), taste, complexity, and so on. As humans we’re notoriously bad at taste and smell compared to our other senses, so even just trying to identify the different flavours is a constant challenge. They are sometimes elusive or fleeting; there at the start, but then gone with the vapours a few minutes later. Sometimes they are maddeningly familiar, but the right word, recollection or label for it is just out of reach. Geoff, a true national treasure, runs a good show; reminding us not to speak too much aloud and cloud each others’ judgements, but dropping a few helpful hints and starting points to look for in aged reds, and Australian Syrah in particular, drawing on his 40 years of wine cellaring, judging, and writing.

Most of them were just as you’d imagine beautiful aged 20 year-old Syrah to be: plum or berry dominant, interesting florals, smooth, and tannins tamed by oak and time. That is, apart from No. 5, which to my nose was all fresh cowpat and sweaty horse. No. 7 to me had an unpleasant butyric bile odour, but a weird, almost salty savoury taste, like Parmigiano. My favourites were No. 3, for its sheer number and complexity of different and intriguing flavours and its beautiful long velvety finish, and No. 8, which was a standout for me. It was the most purple-red of the set, as if it were only three years old, while all the others had aged to a fairly uniform red-ruby, near garnet colour. It had a bold nose of cognac, almond and cherry, with a slight floral element of jasmine and violets. Strong dark plum fruit but with a savoury hint of truffle, and its long-lingering tannins, whilst softened with the oak, were still unwinding even after all this time, and could probably go for another ten years.

Before revealing the wines, Geoff asked us to rate a first and second favourite, a least favourite, and which we thought was the French wine hiding in the glasses. This data set is tabulated below.

No. 5 was the 1996 Cape Mentelle from Margaret River, Western Australia, which might have had either a dose of brett or it was corked. No. 3 was the 1996 d’Arenberg Dead Arm from McLaren Vale, South Australia, and No. 8, my favourite, was the 1995 Coriole Lloyd Reserve, also from McLaren Vale. The No. 7 was the ludicrously expensive Hermitage (AOC Syrah from Rhône, France), the Jaboulet Hermitage La Chapelle; Jancis Robinson writes about this wine, here. Luckily for me, Regional Wines had a couple of the 2011 Lloyd Reserves in stock!

The full list of wines are detailed on Geoff’s library tasting page, and reproduced here:

1. 1996 Seppelt Shiraz Mount Ida, Heathcote, Victoria
2. 1996 Barossa Valley Estates E&E Shiraz Black Pepper, Barossa Valley
3. 1996 d’Arenberg Shiraz Dead-Arm, McLaren Vale, South Australia
4. 1996 Jim Barry Shiraz McRae Wood, Clare Valley, SA
5. 1996 Cape Mentelle Shiraz, Margaret River, Western Australia
6. 1996 Burge Shiraz Meshach, Barossa Valley, SA
7. 1996 Jaboulet Hermitage La Chapelle, Northern Rhone Valley, France
8. 1995 Coriole Shiraz Lloyd’s Reserve, McLaren Vale, SA
9. 1996 Bannockburn Shiraz, Geelong, Victoria
10. 1997 Mount Langi Ghiran Shiraz Langi, Grampians, Victoria
11. 1996 Henschke Shiraz Mount Edelstone, Eden Valley, SA
12. 1996 McWilliams Shiraz Maurice O’Shea, Hunter Valley, NSW

21 October 2016

Kristina Hoeppner


Getting the hang of hanging out (part 2)

A couple of days ago I experienced some difficulties using YouTube Live Events. So today, I was all prepared:

  • Had my phone with me for 2-factor auth so I could log into my account on a second computer in order to paste links into the chat;
  • Prepared a document with all the links I wanted to paste;
  • Had the Hangout on my presenter computer running well ahead of time.

Indeed, I was done with my prep so far in advance that I had heaps of time. Since I couldn’t see anything on the screen, it looked like the event was not actually broadcasting, so I thought I needed to adjust the broadcast’s start time and wanted to pause it.

That’s why I stopped the broadcast, and as soon as I hit the button I knew I shouldn’t have. Stopping the broadcast doesn’t pause it; it ends it and kicks off the publishing process.

Yep, I panicked. I had about 10 minutes to go to my session and nobody could actually join it. Scrambling for a solution, I quickly set up another live event, tweeted the link and also sent it out to the Google+ group.

Then I changed the title of the just ended broadcast to something along the lines of “Go to description for new link”, put the link to the new stream into the description field and also in the chat as I had no other way of letting people know where I had gone and how they could join me.

I was so relieved when people showed up in the new event. That’s when the panic subsided, and I still had about 3 minutes to spare before the start of the session.

The good news? We released Mahara 16.10 and Mahara Mobile today (though actually, we soft-launched the app on the Google Play store already yesterday to ensure that it was live for today).

19 October 2016

Kristina Hoeppner


Getting the hang of hanging out (part 1)

Living in New Zealand, far, far away from the rest of the world (except maybe Australia), means that I’m doing a lot of online conference presentations, demonstrations, and meetings. I’ve become well-versed in a multitude of online meeting and conferencing software and know what works on Linux and what doesn’t.

The latter always give me a fright as I have to start up my VM and hope for the best that it will not die on me unexpectedly. Usually, closing Thunderbird and any browsers helps free some resources in order to let Windows start up. I can only dream of a world in which every conferencing software also runs on Linux.

Lately, some providers have gotten better and make use of WebRTC technology, which only requires a browser but no fancy additional software or flash. Only when I want to do screensharing do I need to install a plugin, which is done quickly.

So for meetings of fewer than 10 people, I’m usually set and can propose a nice solution like Jitsi, which works well. In the past, my go-to option was Firefox Hello for simple meetings, but that was taken off the market.

But what to do when there may be more than 10 people wanting to attend a session? Then it gets tough very quickly. So I have been trialling Google Hangouts on Air recently after I’ve seen David Bell use them successfully. It looked easy enough, but boy, was I in for a surprise.

Finding the dashboard

At some point, my YouTube account was switched to a “Creator Studio” one and so I can do live events. Google Hangouts on Air are now YouTube Live Events and need to be scheduled in YouTube.

There is no link from the YouTube homepage to the dashboard for uploading or managing content. I’d have thought that by clicking on “My channel” I’d get somewhere, but far from it. There is nothing in the navigation.

The best choice is to click the “Video Manager” to get to a subpage of the creator area. Or, as I just found out, click your profile icon and then click the “Creator Studio” button.

Finding the creator dashboard

Getting to the creator dashboard either via the “Video Manager” on your channel or via the button under your profile picture.

Scheduling an event

Setting up an event is pretty straightforward as it’s like filling in the information for a video upload, just with the added fields for event times.

Unfortunately, I haven’t found yet where I can change the placeholder for the video that is shown in the preview of the event on social media. It seems to set it to my channel’s banner image rather than allowing me to upload an event-specific image.

So once you have your event, you are good to go and can send people the link to it. The links that you get are only for the stream. They do not allow your viewers to actually join your hangout and communicate with you in there and that’s where it gets a bit bizarre and what prompted me to write this blog post so I can refer back to it in the future.

Different links for different hangouts

There is the hangout link and the YouTube event link

Streaming vs. Hangout

There are actually two components to the YouTube Live event (formerly known as Google Hangout on Air):

  1. The Hangout from which the presenter streams;
  2. The YouTube video stream that people watch.

In order to get into the Hangout, you click the “Start Hangout on Air” button on your YouTube events page. That takes you into a Google Hangout with the added buttons for the live event. You are supposed to see how many people joined in, but the count may be a bit off at times.

In that Google Hangout, you have all the usual functionality available of chats, screensharing, effects etc. You can also invite other people to join you in there. That will allow them to use the microphone. The interesting thing is that you can simply invite them via the regular Hangout invite. You can’t give them the link to the stream as they would not find the actual hangout. And if you only give people the link to the Hangout but not the stream, nobody will be in the stream.

Finding the relevant links in the hangout

You can also get the two different links from the hangout. Just make sure you get the correct one.

The YouTube video stream page only shows the content of the Hangout that is displayed in the video area, but not the chat. The live event has its separate chat that you can’t see in the Hangout! In order to see any comments your viewers make, you need to have the streaming page open and read the comments there.

In a way, it’s nice to keep the Hangout chat private because if you have other people join you in there as co-presenters, you can use that space to chat to each other without other viewers seeing what you type. However, it’s pretty inconvenient as you have to remember to check the other chat. Dealing with separate windows during a presentation can be daunting. It would be nicer to see the online chat also in the hangout window.

Today I even just fired up another computer and had the stream show there, which taught me another thing.

Having the stream on another computer also showed me how slow the connection was. The live event was at least 5 seconds behind if not more. That is something to consider when taking questions.

The stream was also very grainy. I was on a fast connection, but the default speed was on the lowest setting nevertheless. Fortunately, once I increased the resolution on the finished video, the video did get better. I don’t know if you could increase the setting during the stream.

Last but not least, I couldn’t present in full-screen mode as the window wouldn’t be recognized. I’ll have to try again and see if it works if I screenshare my entire desktop as it would be nicer not to show the browser toolbars.

No sharing of links

When you are not the owner of the stream, you cannot post URLs. I’m pretty sure that is to prevent trolls misusing public YouTube events to post links. However, it’s pretty inconvenient for the rest who want to hold meetings and webinars and share content. You can’t post a single link. Only I as organizer could post links. Unfortunately, I found that out only after the event as I was logged in under a different account.

Being used to many other web conferencing software, I’ve come to like the backchannel and the possibility to post additional material, which are in many cases links, so people can simply click on them. This was impossible in the YouTube live event as I was only a regular user. And even had I logged in with my creator account, which I’ll certainly do during the next session on Friday, nobody else would have been able to post a link. That is very limiting. I wish it were possible to determine whether links were allowed or not.

Editing the stream

Once the event was over today, I went back to the video, but couldn’t find any editing tools. I started being discouraged as I had hoped to simply trim the front and the back a bit from non-essential chatter and then just keep the rest of the video online rather than trimming my local recording that I had done on top of the online recording, encoding that and uploading it. Before I could get sadder, I had to do some other work, and once I came back to the recording, I suddenly had all my regular editing tools available and rejoiced. Apparently, it takes a bit until all functionality is at your disposal.

So I trimmed the video, which was not easy, but I managed. And then it did its encoding online. After some time, the shortened recording was available and I didn’t have to send out a new link to the video. 🙂

Summing up

What does that mean for the next live event with YouTube events?

  1. Click the “Creator Studio” button under my Google / YouTube profile to get to the editor dashboard easily.
  2. Invite people who should have audio privileges through the Hangout rather than giving them the YouTube Live link, which is displayed more prominently.
    • Co-presenters are invited via Hangout.
    • Viewers get the YouTube live link.
  3. Open the YouTube Live event with the event creator account in order to be able to post links in the chat on YouTube. Have both the Hangout and the YouTube Live event open so you can see the online chat of those who aren’t in the Hangout.
  4. Take into account that there is a delay until the content is shown on YouTube.
  5. Once finished, wait a bit until all editing features are available and then go into post-production.

Remembering all these things will put me into a better position for the next webinar, which is a repeat session of today’s and showcases the new features of Mahara 16.10.

Update: Learn some more about YouTube Live events from my second webinar.

14 October 2016

Jonathan Harker

Learning the contrabass trombone

Wessex Contrabass in F and Shires bass trombone, side by side.

I’ve recently acquired a Wessex contrabass trombone in F. It is pretty much a knock-off of the Thein Ben van Dijk model and, compared to that gold standard of contrabass trombones, costs about an eighth of the price and is a perfectly decent instrument. It plays really well throughout the range, and the slide, valves and bell are all of high build quality, unlike the notorious Chinese-made instruments of the past.

But really, this post is just an excuse to test out a nifty music notation WordPress plugin. The shorthand it uses is ABC, which is a bit quaint compared to Lilypond, but it seems to work well enough. For instance, take the first scale we might learn on a contrabass trombone:

The contrabass trombone in F only has six positions on the open slide instead of seven. Furthermore, only the first five are actually practical, unless you are Tarzan, so we can play the G on the first (D) valve in third position. While the A is also theoretically available in first position on the D valve, it is indistinct and slightly flat. Play it on the open slide in fourth. Good. Now, how about an excerpt from Ein Alpensinfonie by Richard Strauss:

Sounds good! Now, pop along to the NZSO performance in March 2017 to hear Shannon playing it, live in concert! In the meantime, here’s this excerpt played by the Berlin Philharmoniker:

11 October 2016

Kristina Hoeppner


Mahara Hui @ AUT recap

I’m playing catch-up and working my way backwards through my events. Yesterday, I wrote a bit about the NZ MoodleMoot on 5 October 2016. Just a day before that, AUT organized a local half-day Mahara Hui, Mahara Hui @ AUT 2016. Lisa Ransom and Shen Zhang from CfLAT (Centre for Learning and Teaching) were responsible for the event, wrangled everything well, and made all attendees feel welcome.

It was great to catch up with lecturers and learning technology support staff from AUT, Unitec and University of Waikato, and with a user from Nurseportfolio. We started the day out with introductions and examples of how people use Mahara.

Mahara in New Zealand tertiaries

At AUT, the CfLAT team trained about 630 students this academic year, in particular in Public Policy, Tourism and Midwifery. Paramedics are also starting to use ePortfolios and can benefit from the long experience Lisa and Shen have supporting other departments at AUT.

Linda reported that Mahara is now also being used in culinary studies in elective courses as well as degree papers. They use templates to help students get started, but then let them run with it. Portfolios are well suited for culinary students as they can showcase their work as well as document their creation progress and improve their work.

She also showcased a portfolio from a new lecturer who became a student in her area of expertise, going through a portfolio assignment with her students to see for herself how the portfolios worked and what she could and wanted to expect from her students. By going through the activity herself, she became an expert and now has a better understanding of the portfolio work.

John, a practicum leader who was new to AUT, came along to the hui and said that they were starting to use portfolios for their lesson plans and goals. Reflections are expected from the future teachers and form an important aspect. I’m sure we’ll hear more from him.

Sally from Nursing at AUT is looking at Mahara again, and the instructor could form connections directly with Unitec and Nurseportfolio, which is fantastic, because that’s what these hui are about: connecting people.

JJ updated the group on the activities at Unitec. Medical imaging is going digital and looking into portfolios, and they also created a self-paced Moodle course on how to teach with Mahara effectively so that lecturers at Unitec can get a good overview.

Stephen from the University of Waikato gave an overview of the portfolio activities at his university. Waikato still works with two systems: one for education students becoming teachers, and the new Waikato-hosted Mahara site. Numerous faculties at Waikato now work with portfolios. If you’d like to find out more directly, you can watch recordings from the last WCELfest, in particular the presentations by Richard Edwards, Sue McCurdy and Stephen Bright. Portfolios will be used even more in the future, as evidence from general papers will need to be collected in them by every student.

We also discussed a couple of ideas from a lecturer and are interested in other people’s opinions on them. One idea was to be able to share portfolios more easily on social networks and then see directly when the portfolio was updated and share that news again. The other was to show people who are interested in a portfolio when new content has been added. The latter is already possible to a degree with the watchlist; however, students or lecturers still need to put specific pages on the watchlist first, rather than the changes coming to them. The enhancements that Gregor is planning for the watchlist go more in that direction.

Mahara 16.10

In the second part of the hui, I presented the new features of Mahara 16.10, and we spent a bit of time taking a closer look at SmartEvidence.

I’m very excited that this new version will be live very soon and look forward to feedback from users on how SmartEvidence works out for them. It’s the initial implementation, and while it doesn’t contain all the bells and whistles, I think it is a great beginning to get conversations started around use cases beyond the ones we had in mind, and to see how flexible it is.

Next hui and online meetings

If you want to share how you are using Mahara, you’ll have the opportunity to do so in Wellington on 27 October 2016 when we’ll have another local Mahara Hui, Mahara Hui @ Catalyst. From 5 to 7 April 2017, we are planning a bigger Mahara Hui again in Auckland. More information will be shared soon on the Mahara Hui website.

There will also be two MUGOZ online meetings on 19 and 21 October 2016 in which I’ll be presenting the new Mahara 16.10 features. You are welcome to attend either of these 1-hour sessions organized by the Australian Mahara User Group. Since the sessions are online, anybody can tune in.

24 July 2016

Andrew Ruthven

Allow forwarding from VoiceMail to cellphones

Something I've been wanting to do with our Asterisk PBX at Catalyst for a while is to allow callers who hit VoiceMail to be forwarded to the callee's cellphone, if allowed. As part of an Asterisk migration we're currently carrying out, I finally decided to investigate what is involved. One of the nice things about the VoiceMail application in Asterisk is that callers can hit 0 for the operator, or * for some other purpose. I decided to use * for this purpose.

I'm going to assume a working knowledge of Asterisk dial plans, and I'm not going to try and explain how it works. Sorry.

When a caller hits *, the VoiceMail application exits and Asterisk looks for a rule that matches the a extension. Now, the simple approach looks like this within our macro for handling standard extensions:

exten => a,1,Goto(pstn,027xxx,1)

(Where I have a context called pstn for placing calls out to the PSTN).

This'll work, but anyone who hits * will be forwarded to my cellphone. Not what I want. Instead we need to get the dialled extension into a place where we can perform extension matching on it. So instead we'll have this (the extension is passed into macro-stdexten as the first variable - ARG1):

exten => a,1,Goto(vmfwd,${ARG1},1)

Then we can create a new context called vmfwd with extension matching (my extension is 7231):

exten => 7231,1,Goto(pstn,027xxx,1)

I actually have a bit more in there to do some logging and set the caller ID to something our SIP provider will accept, but you get the gist of it. All I need to do is arrange for a rule for each extension that is allowed to have its VoiceMail callers forwarded to a cellphone. Fortunately I have that part automated.

The only catch is extensions that aren't allowed to be forwarded to a cellphone. If a caller reaching their VoiceMail hits *, the call will be hung up and I get nasty log messages about there being no rule for them. How do we handle them? Well, we send them back to VoiceMail. In the vmfwd context we add a rule like this:

exten => _XXXX,1,VoiceMail(${EXTEN}@sip,${voicemail_option})
  same => n,Hangup

So any extension that isn't otherwise matched hits this rule. We use ${voicemail_option} so that we can use the same mode as was used previously.

Easy! Naturally this exact approach won't work for other people trying to do this, but given I couldn't find write-ups on how to do it, I thought it might be useful to others.

Here's my macro-stdexten and vmfwd in full:

exten => s,1,Progress()
exten => s,n,Dial(${ARG2},20)
exten => s,n,Goto(s-${DIALSTATUS},1)
exten => s-NOANSWER,1,Answer
exten => s-NOANSWER,n,Wait(1)
exten => s-NOANSWER,n,Set(voicemail_option=u)
exten => s-NOANSWER,n,Voicemail(${ARG1}@sip,u)
exten => s-NOANSWER,n,Hangup
exten => s-BUSY,1,Answer
exten => s-BUSY,n,Wait(1)
exten => s-BUSY,n,Set(voicemail_option=b)
exten => s-BUSY,n,Voicemail(${ARG1}@sip,b)
exten => s-BUSY,n,Hangup
exten => _s-.,1,Goto(s-NOANSWER,1)
exten => a,1,Goto(vmfwd,${ARG1},1)
exten => o,1,Macro(operator)


exten => _XXXX,1,VoiceMail(${EXTEN}@sip,${voicemail_option})
  same => n,Hangup

#include extensions-vmfwd-auto.conf

And I then build extensions-vmfwd-auto.conf from a script that is used to generate configuration files for defining accounts, other dial plan rule entries and phone provisioning files.
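Building that file can be as simple as a few lines. Here's a hypothetical sketch (the extension-to-cellphone mapping and everything else in it are illustrative, not the actual script):

```python
# Hypothetical generator for extensions-vmfwd-auto.conf. The mapping is
# made up for illustration; a real deployment would pull it from
# whatever system defines the accounts.
mappings = {
    "7231": "027xxx",  # extension -> cellphone (number elided, as in the post)
}

lines = [
    f"exten => {ext},1,Goto(pstn,{cell},1)"
    for ext, cell in sorted(mappings.items())
]
print("\n".join(lines))  # prints: exten => 7231,1,Goto(pstn,027xxx,1)
```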

With thanks to John Kiniston for the suggestion about the wildcard entry in vmfwd.

25 August 2014

Dan Marsden

SCORM hot topics.

As a follow-up to the GSOC post, I thought it might be useful to mention a few things happening with SCORM at the moment.

There are currently approximately 71 open SCORM-related issues in the Moodle tracker. Of those, 38 are classed as bugs/issues I should fix in stable branches at some point; the other 33 are really feature/improvement requests.

Issues about to be fixed and under development:
  • MDL-46639 – External AICC packages not working correctly.
  • MDL-44548 – SCORM repository auto-update not working.

Issues that are high on my list of things to look at, and that I hope to get to sometime soon:
  • MDL-46961 – SCORM player not launching in Firefox when a new window is used.
  • MDL-46782 – Re-entry of a SCORM that isn’t using suspend_data or resuming itself should allow returning to the first SCO that is not complete.
  • MDL-45949 – The TOC tree isn’t quite working as it should after our conversion to YUI3 – it isn’t expanding/collapsing in a logical manner – there could be a bit of work here to make this work in the right way.

Issues recently fixed in stable releases:
  • MDL-46940 – New window option not working when preview mode is disabled.
  • MDL-46236 – Start new attempt option ignored if new window used.
  • MDL-45726 – Incorrect handling of review mode.

New improvements you might not have noticed in 2.8 (not released yet):
  • MDL-35870 – Performance improvements to SCORM.
  • MDL-37401 – SCORM auto-commit – allows Moodle to save data periodically even if the SCORM doesn’t call “commit”.

New improvements you might not have noticed in 2.7:
  • MDL-28261 – Check for live internet connectivity while using SCORM – warns the user if the SCORM is unable to communicate with the LMS.
  • MDL-41476 – The SCORM spec defines a small amount of data that can be stored when using SCORM 1.2 packages; we have added a setting that allows you to disable this restriction within Moodle, to allow larger amounts of data to be stored (you may need to modify your SCORM package to send more data to make this work).

Thanks to Ian Wild, Martin Holden, Tony O’Neill, Peter Bowen, André Mendes, Matteo Scaramuccia, Ray Morris, Vignesh, Hansen Ler, Faisal Kaleem and many other people who have helped report/test and suggest fixes related to SCORM recently including the Moodle HQ Integration team (Eloy, Sam, Marina, Dan, Damyon, Rajesh) who have all been on the receiving end of reviewing some SCORM patches recently!

GSoC 2014 update

Another year of GSoC has just finished, and Vignesh has done a great job helping us to improve a number of areas of SCORM!
I’m really glad to finally have some changes made to the JavaScript datamodel files as part of MDL-35870 – I’m hoping this will improve the performance of the SCORM player, as the JavaScript can now be cached properly by the user’s browser rather than being dynamically generated by PHP.

Vignesh has made a number of general bug fixes to the SCORM code and has also tidied up the code in the 2.8 branch so that it now complies with Moodle’s coding guidelines.

These changes have involved almost every single file in the SCORM module, and significant architectural changes have been made. We’ve done our best to avoid regressions (thanks Ray for testing SCORM 2004), but due to the large number of changes (and the fact that we only have one Behat test for SCORM) it would be really great if people could test the 2.8 branch with their SCORM content before release, so we can pick up any other regressions that may have occurred.

Thanks heaps to Vignesh for his hard work on SCORM during GSOC – and kudos to Google for running a great program and providing the funding to help it happen!

10 July 2014

Dan Marsden

Goodbye Turnitin…

Time to say goodbye to the “Dan Marsden Turnitin plugin”… well almost!

Turnitin have done a pretty good job of developing a new plugin to replace the code that I have been working on since Moodle 1.5!

The new version of their plugin contains 3 components:

  1. A module (called turnitintool2) which contains the majority of the code for connecting to their new API and is a self-contained activity like their old “turnitintool” plugin
  2. A replacement plugin for mine (plagiarism_turnitin) which allows you to use plagiarism features within the existing Moodle Assignment, Workshop and forum modules.
  3. A new Moodle block that works with both the above plugins.

The Plugins database entry has been updated to replace my old code with the latest version from Turnitin. We have a number of clients at Catalyst using the new plugin, and the migration has mostly gone OK so far. There are a few minor differences between my plugin and the new version from Turnitin, so I encourage everyone to test the upgrade to the new version before running it on their production sites.

I’m encouraging most of our clients to update to the new plugin at the end of this year, but I will continue to provide basic support for my version on all Moodle versions up to Moodle 2.7. My code continues to be available from my GitHub repository here:

Thanks to everyone who has helped in the past with the plugin I wrote – hopefully this new version from Turnitin will meet everyone’s needs!

31 October 2012

Chris Cormack

Signoff statistics for October 2012

Here are the signoff statistics for bugs in October 2012
  • Kyle M Hall - 24
  • Owen Leonard - 18
  • Chris Cormack - 15
  • Nicole C. Engard - 10
  • Mirko Tietgen - 9
  • Marc Véron - 6
  • Frédéric Demians - 5
  • Jared Camins-Esakov - 5
  • Magnus Enger - 4
  • Jonathan Druart - 4
  • M. de Rooy - 3
  • Melia Meggs - 3
  • wajasu - 2
  • Paul Poulain - 2
  • Fridolyn SOMERS - 2
  • Tomás Cohen Arazi - 2
  • Matthias Meusburger - 1
  • Katrin Fischer - 1
  • Julian Maurice - 1
  • Koha Team Lyon 3 - 1
  • Mason James - 1
  • Elliott Davis - 1
  • mathieu saby - 1
  • Robin Sheat - 1

16 October 2012

Chris Cormack

Unsung heroes of Koha 26 – The Ada Lovelace Day Edition

Darla Grediagin

Darla has been using Koha since 2006, for the Bering Strait School District in Alaska. This is pretty neat in itself; what is cooler is that, as far as I know, they have never had a ‘Support Contract’, doing things either by themselves or with the help of IT personnel as needed. One of Darla’s first blog posts that I read was about her struggles trying to install Debian on an eMac. I totally respect anyone who is trying to reclaim hardware from the dark side 🙂

Darla has presented on Koha at conferences, and maintains a blog that has useful information, including sections on what she would do differently, as well as some nice feel-good bits like this, from April 2007:

I know I had an entry titled this before, but I do love OSS programs.   Yesterday I mentioned that I would look at Pines because I like the tool it has to merge MARC records.  Today a Koha developer emailed me to let me know that he is working on this for Koha and it should be available soon.  I can’t imagine getting that kind of service from a vendor.

Hopefully she will be able to make it to KohaCon13 in Reno, NV. It would be great to put a face to the email address 🙂

10 October 2012

Chris Cormack

New Release team for Koha 3.12

Last night on IRC the Koha community elected a new release team for the 3.12 release. Once again it is a nicely mixed team: there are 16 people involved, from 8 different countries (India, New Zealand, USA, Norway, Germany, France, Netherlands, Switzerland), and four of the 12 roles are filled by women.

The release team will be working super hard to bring you the best release of Koha yet, and you can help:

  • Reporting bugs
  • Testing bug fixes
  • Writing up enhancement requests
  • Using Koha
  • Sending cookies
  • Inventing time travel
  • Killing MARC
  • Winning the lottery and donating the proceeds to the trust to use for Koha work.

24 July 2012

Pass the Source

Google Recruiting

So, Google are recruiting again. From the open source community, obviously. It’s where to find all the good developers.

Here’s the suggestion I made on how they can really get in front of FOSS developers:

Hi [name]

Just a quick note to thank you for getting in touch with so many of our
Catalyst IT staff, both here and in Australia, with job offers. It comes
across as a real compliment to our company that the folks that work here
are considered worthy of Google’s attention.

One thing about most of our staff is that they *love* open source. Can I
suggest, therefore, that one of the best ways for Google to demonstrate
its commitment to FOSS and FOSS developers this year would be to be a
sponsor of the NZ Open Source Awards. These have been very successful at
celebrating and recognising the achievements of FOSS developers,
projects and users. This year there is even an “Open Science” category.

Google has been a past sponsor of the event and it would be good to see
you commit to it again.

For more information see:

Many thanks

09 July 2012

Andrew Caudwell

Inventing On Principle Applied to Shader Editing

Recently I have been playing around with GLSL Sandbox (github), a what-you-see-is-what-you-get shader editor that runs in any WebGL-capable browser (such as Firefox, Chrome and Safari). It gives you a transparent editor pane in the foreground and the resulting compiled fragment shader rendered behind it. Code is recompiled dynamically as it changes. The latest version even has syntax and error highlighting, and bracket matching.

There have been a few other WebGL-based shader editors like this in the past, such as Shader Toy by Iñigo Quílez (aka IQ of demoscene group RGBA) and his more recent (though I believe unpublished) editor used in his fascinating live-coding videos.

Finished compositions are published to a gallery with the source code attached, and can be ‘forked’ to create additional works. Generally the author will leave their twitter account name in the source code.

I have been trying to get to grips with some more advanced raycasting concepts, and being able to code something up in sandbox and see the effect of every change is immensely useful.

Below are a bunch of my GLSL sandbox creations (batman symbol added by @emackey):



GLSL Sandbox is just the latest example of the merit of software development tools that provide immediate feedback, and highlights the major advantages scripting languages have over heavyweight compiled languages, whose long build and link times make experimentation costly and tedious. Inventing on Principle, a presentation by Bret Victor, is a great introduction to this topic.

I would really like a save-draft button that saves shaders locally, so I have some place to keep works in progress. I might have to look at how I can add this.

Update: Fixed links to point at

05 June 2012

Pass the Source

Wellington City Council Verbal Submission

I made the following submission on the Council’s Draft Long Term Plan. Some of this related to FLOSS. This was a 3 minute slot with 2 minutes for questions from the councillors.


I have been a Wellington inhabitant for 22 years and am a business owner. We employ about 140 staff in Wellington, with offices in Christchurch, Sydney, Brisbane and the UK. I am also co-chair of NZRise which represents NZ owned IT businesses.

I have 3 Points to make in 3 minutes.

1. The Long Term plan lacks vision and is a plan for stagnation and erosion

It focuses on selling assets, such as community halls and council operations, postponing investments, reducing public services such as libraries and museums, and increasing user costs. This will not create a city where “talent wants to live”. With this plan, who would have thought the citizens of the city had elected a Green Mayor?

Money speaks louder than words. Both borrowing levels and proposed rate increases are minimal and show a lack of investment in the city, its inhabitants and our future.

My company is about to open an office in Auckland. A manager was recently surveying staff about team allocation and noted, as an aside, that between 10 and 20 Wellington staff would move to Auckland given the opportunity. We are not simply competing with Australia for hearts and minds, we are competing with Auckland whose plans for investment are much higher than our own.

2. Show faith in local companies

The best way to encourage economic growth is to show faith in the talent that actually lives here and pays your rates. This means making sure council staff have a strong direction and mandate to procure locally. In particular, the procurement process needs to be overhauled to make sure it does not exclude SMEs (our backbone) from bidding for work (see this NZCS story). It needs to be streamlined, transparent and efficient.

A way of achieving local company participation is through disaggregation – breaking up large-scale initiatives into smaller, more manageable components – for the following reasons:

  • It improves project success rates, which helps the public sector be more effective.
  • It reduces project cost, which benefits the taxpayers.
  • It invites small business, which stimulates the economy.

3. Smart cities are open source cities

Use open source software as the default.

It has been clear for a long time that open source software is the most cost-effective way to deliver IT services. It works for Amazon, Facebook, Red Hat and Google, and just about every major Silicon Valley success since the advent of the internet. Open source drives the internet and these companies because it has an infinitely scalable licensing model – free. Studies, such as the one I have here from the London School of Economics, show the cost effectiveness and innovation that come with open source.

It pains me to hear about proposals to save money by reducing library hours and increasing fees, when the amount of money being saved is less than the annual software licence fees our libraries pay, and world-beating free alternatives exist.

This has to change. Looking around the globe, it is the visionary and successful local councils that are mandating the use of FLOSS, from Munich to Vancouver to Raleigh, NC, to Paris to San Francisco.

As well as saving money, open source brings a state of mind. That is:

  • Willingness to share and collaborate
  • Willingness to receive information
  • The right attitude to be innovative, creative, and try new things

Thank you. There should now be 2 minutes left for questions.

05 January 2012

Pass the Source

The Real Tablet Wars

tl;dr (formerly known as an Executive Summary): Openness + Good Taste Wins

Gosh, it’s been a while. But this site is not dead. Just been distracted by and twitter.

I was going to write about Apple, again. A result of unexpected and unwelcome exposure to an iPad over the Christmas Holidays. But then I read Jethro Carr’s excellent post where he describes trying to build the Android OS from Google’s open source code base. He quite mercilessly exposes the lack of “open” in some key areas of that platform.

It is more useful to look at the topic as an issue of “open” vs “closed” where iPad is one example of the latter. But, increasingly, Android platforms are beginning to display similar inane closed attributes – to the disadvantage of users.

Part of my summer break was spent helping out at the premier junior sailing regatta in the world, this year held in Napier, NZ. Catalyst, as a sponsor, has built and is hosting the official website.

I had expected to swan around, sunbathing, drinking cocktails and soaking up some atmosphere. Instead a last minute request for a new “live” blogging section had me blundering around Joomla and all sorts of other technology with which I am happily unfamiliar. Days and nightmares of iPads, Windows, wireless hotspots and offshore GSM coverage.

The plan was simple: the specialist blogger, himself a world-renowned sailor, would take his tablet device out on the water on the spectator boat. From there he would watch and blog starts, racing, finishes and anguished reactions from parents (if there is one thing that unites races and nationalities, it is parental anguish over sporting achievement).

We had a problem in that the web browser on the tablet didn’t work with the web-based text editor used in the Joomla CMS. That had me scurrying around for a replacement for the TinyMCE plugin, just about the most common browser-based editing tool. But a quick scan around various forums showed me that the alternative editors were not a solution, and that the real issue was a bug in the client browser.

“No problem”, I thought. “Let’s install Firefox, I know that works”.

But no, Firefox is not available to iPad users, and Apple likes to “protect” its users by tightly controlling whose applications are allowed to run on the tablet. OK, what about Chrome? Same deal. You *have* to use Apple’s own buggy browser; it’s for your own good.

Someone suggested that the iPad’s operating system we were using needed upgrading, and the new version might have a fixed browser. No, we couldn’t do that, because we didn’t have Apple’s music-playing software, iTunes, on a PC. Fortunately Vodafone were also a sponsor, and not only did they have iTunes handy, they downloaded the upgrade. Only problem: the upgrade wiped all the apps that our blogger and his family had previously bought and installed.

Er, and the upgrade failed to fix the problem. One day gone.

So a laptop was press-ganged into action, which, in the end, was a blessing, because other trials later showed that typing blogs fast, on an ocean swell, is very hard without a real keyboard. All those people pushing tablets at schools: keep in mind it is good to have our children *write* stuff, often.

The point of this post is not really to bag Apple, but to bag the mentality that stops people using their own devices in ways that help them through the day. I only wanted to try a different browser to Safari, not an unusual thing to do. Someone else might want to try out a useful little application a friend has written for them, but that wouldn’t be allowed.

But the worst aspect of this is that because of Apple’s success in creating well designed gadgets other companies have decided that “closed” is also the correct approach to take with their products. This is crazy. It was an open platform, Linux Kernel with Android, that allowed them to compete with Apple in the first place and there is no doubt that when given a choice, choice is what people want – assuming “taste” requirements are met.

Other things being equal*, who is going to choose a platform where the company that sold you a neat little gadget controls all the things you do on it? But there is a strong trend by manufacturers such as Samsung, and even Linux distributions such as Ubuntu, to start placing restrictions on their clients and users – to decide for all of us how we should behave and operate *our* equipment.

The explosive success of the personal computer was that it was *personal*. It was your own productivity, life enhancing device. And the explosive success of DOS and Windows was that, with some notable exceptions, Microsoft didn’t try and stop users installing third party applications. The dance monkey boy video is funny, but the truth is that Microsoft did want “developers, developers, developers, developers” using its platforms because, at the time, it knew it didn’t know everything.

Apple, Android handset manufacturers and even Canonical (Ubuntu) are falling into the trap of not knowing that there is stuff they don’t know and they will probably never know. Similar charges are now being made about Facebook and Twitter. The really useful devices and software will be coming from companies and individuals who realise that whilst most of what we all do is the same as what everyone else does, it is the stuff that we do differently that makes us unique and that we need to control and manage for ourselves. Allow us do that, with taste, and you’ll be a winner.

PS I should also say “thanks” fellow sponsors Chris Devine and Devine Computing for just making stuff work.

* I know all is not equal. Apple’s competitive advantage is that it “has taste” – but not in its restrictions.

18 May 2011

Andrew Caudwell

Show Your True Colours

This last week saw the release of a fairly significant update to Gource, replacing the outdated, 3DFX-era rendering code with something a bit more modern, utilizing more recent OpenGL features like GLSL pixel shaders and VBOs.

A lot of the improvements are under the hood, but the first thing you’ll probably notice is the elimination of banding artifacts in bloom, the illuminated fog Gource places around directories. This effect is pretty tough on the ‘colour space’ of so-called Truecolor, the maximum colour depth of consumer monitors and display devices, which provides 256 different shades of grey to play with.

When you render a gradient across the screen, there are 3 or 4 times more pixels than there are shades of each colour, producing visible ‘bands’ of the same shade. If multiple gradients like this get blended together, as happens with bloom, you simply run out of ‘in between’ colours and the issue becomes more exaggerated, as seen below (contrast adjusted for emphasis):
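The arithmetic behind those bands can be checked with a quick sketch: quantise a smooth gradient to 256 grey levels across a screen height of pixels and measure how wide each run of identical pixels is (the 1080-pixel height is just an assumed example resolution).

```python
# Quantise a vertical gradient to `shades` grey levels and measure the
# width of each run of identical pixels -- each run is one visible band.

def band_widths(pixels=1080, shades=256):
    """Widths in pixels of each run of identical quantised grey values."""
    values = [round(i / (pixels - 1) * (shades - 1)) for i in range(pixels)]
    widths = []
    run = 1
    for prev, cur in zip(values, values[1:]):
        if cur == prev:
            run += 1
        else:
            widths.append(run)
            run = 1
    widths.append(run)
    return widths

widths = band_widths()
# most bands come out several pixels wide -- wide enough to see as steps
print(len(widths), min(widths), max(widths))
```

At 1080 pixels and 256 shades, each shade has to cover roughly 1080/256 ≈ 4.2 pixels, which is exactly the band width the eye picks up.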


Those aren’t compression artifacts you’re seeing!

Gource now uses colour diffusion to combat this problem. Instead of sampling the exact gradient of bloom for the distance of a pixel from the centre of a directory, we take a fuzzy sample in that vicinity instead. When zoomed in, you can see the picture is now slightly noisy, but the banding is completely eliminated. Viewed at the intended resolution, you can’t really see the trickery going on – in fact the effect even seems somewhat more natural, a bit closer to how light bouncing off particles of mist would actually behave.
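The idea can be sketched in a few lines (a hedged CPU-side illustration, not Gource's actual shader code): instead of quantising the exact gradient value, jitter it by up to one quantisation step first, so the rounding error turns into fine noise rather than coherent bands.

```python
import random

def quantise(v, shades=256):
    """Snap a value in [0, 1] to one of `shades` levels."""
    return round(min(max(v, 0.0), 1.0) * (shades - 1))

def sample_exact(v):
    # Plain sampling: neighbouring pixels collapse onto the same shade,
    # which is what produces the visible bands.
    return quantise(v)

def sample_diffused(v, rng=random.random, shades=256):
    # Fuzzy sampling: add up to one quantisation step of noise before
    # snapping, trading coherent banding for slight, far less visible noise.
    jitter = (rng() - 0.5) / (shades - 1)
    return quantise(v + jitter)
```

Each diffused sample lands within one shade of the exact one, so averaged over an area the image still shows the intended gradient, just without the hard band edges.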


The other improvement is speed – everything is now drawn with VBOs: large batches of object geometry are passed to the GPU in as few shipments as possible, eliminating CPU and IO bottlenecks. Shadows cast by files and users are now done in a second pass on the GPU, using the same geometry as the lit pass – making them really cheap compared to before, when we effectively wore the cost of drawing the whole scene twice.

Text is now drawn in a single pass, including shadows, using some fragment shader magic (take two samples of the font texture, offset by 1-by-1 pixels, and blend appropriately). Given the ridiculous number of file, user and directory names Gource draws at once with some projects (Linux kernel Git import commit, I’m looking at you), doing half as much work there makes a big difference.
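That two-sample trick can be illustrated on the CPU with a tiny greyscale "font texture" grid (assumed details for the sketch; the real work happens per-fragment in the shader):

```python
def shadowed_pixel(tex, x, y, shadow_strength=0.5):
    """Glyph sample blended over a second sample offset by (1, 1) as its shadow."""
    glyph = tex[y][x]
    # second sample: the same texture shifted down-right by one pixel
    shadow = tex[y - 1][x - 1] if x >= 1 and y >= 1 else 0.0
    # draw the glyph on top of a darkened copy of itself
    return glyph + (1.0 - glyph) * shadow * shadow_strength

# a single bright "glyph" pixel in the middle of a tiny 3x3 font texture
tex = [
    [0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0],
]
print(shadowed_pixel(tex, 1, 1))  # the glyph pixel itself: 1.0
print(shadowed_pixel(tex, 2, 2))  # pixel down-right picks up the shadow: 0.5
```

Because both samples come from the one texture lookup pass, the glyph and its drop shadow cost a single draw instead of two.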

06 October 2010

Andrew Caudwell

New Zealand Open Source Awards

I discovered today that Gource is a finalist in the Contributor category for the NZOSA awards. Exciting stuff! A full list of nominations is here.

I’m currently taking a working holiday to make some progress on a short film presentation of Gource for the Onward! conference.

Update: here’s the video presented at Onward!:

Craig Anslow presented the video on my behalf (thanks again Craig!), and we did a short Q/A over Skype afterwards. The music in the video is Aksjomat przemijania (Axiom of going by) by Dieter Werner. I suggest checking out his other work!

14 August 2009

Piers Harding

Auth SAML 2.0 for Mahara

Following on from the SAML 2.0 work that I've done recently for Moodle, I thought it would be useful to do the same for the Mahara ePortfolio service while I was in the same space. Details of the first release can be found here, with tested versions for both trunk and 1.1_STABLE.

02 August 2009

Piers Harding

Moodle and SAML 2.0 Web SSO

Of late I have been doing a lot of SSO integration work for the NZ Ministry of Education, and during this time I came across an excellent project, FEIDE. One of the offshoots of this has been the development of a high-quality PHP library for SAML 2.0 Web SSO - SimpleSAMLphp.

For Moodle integration, Erlend Strømsvik of Ny Media AS developed an authentication plugin, to which I've made a number of changes around configuration options and Moodle session integration. This has now been documented and added to Moodle Contrib to give it better visibility to the Moodle community at large. Documentation is here and the contrib entry is here.

27 June 2009

Piers Harding

Perl sapnwrfc 0.30

While doing some work for a client recently, I got the opportunity to do some major performance work on sapnwrfc for Perl. The net result is that a number of memory leaks, mainly from Perl values not going out of scope properly, have been fixed.

Additionally, I've had some time to put together a proper cookbook-style set of examples in the sapnwrfc-cookbook. These examples, while specifically for Perl, are almost identical for the sapnwrfc bindings for Python, Ruby and PHP too.