Catalyst IT Limited  
Planet Catalyst
 


23 November 2017

Catalyst Blog


Serverless Computing in OpenStack

by Bruno Lago

Qinling delivers Function as a Service (Serverless Computing) in OpenStack clouds.

Lingxian Kong and Feilong Wang are senior cloud engineers at the Catalyst Cloud and long-term contributors to OpenStack. At the 2017 OpenStack Summit in Sydney, they announced the Qinling project. Qinling enables customers to run functions (serverless computing) in OpenStack clouds, similar to AWS Lambda, Azure Functions or Google Cloud Functions.

The service was demonstrated live using two common use cases:
1. Running functions that resize images as they are uploaded to an object storage container;
2. Monitoring a web application and sending an SMS notification when it is down.

Qinling integrates with the alarm service in OpenStack (Aodh), so that it can trigger functions based on events from other services. It also integrates with the messaging and notification service (Zaqar), allowing functions to be triggered by messages or to use messages to communicate with each other. The notification service in Zaqar also allows functions to easily send email or SMS notifications.

On the backend, Qinling supports multiple container orchestration systems, such as Kubernetes and Docker Swarm. In terms of runtime environments, a sample Python runtime is already available, and Node.js support is expected next.
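To give a sense of the programming model, here is a rough sketch of what a function for the sample Python runtime might look like. The main entry point and keyword-argument interface shown here are assumptions based on the demo, so check the Qinling documentation for the exact contract:

# A hypothetical Qinling-style Python function. The runtime is assumed
# to invoke an entry point and pass the function's input as keyword
# arguments; whatever is returned becomes the function's result.
def main(**kwargs):
    name = kwargs.get("name", "world")
    return {"message": "Hello, %s!" % name}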

The presentation was well received by the community and sparked a lot of interest in collaboration among vendors, in particular from other public cloud providers. Lingxian is now focusing on onboarding contributors from the community and preparing for a stable release in OpenStack Queens.

The Catalyst Cloud team is excited about the possibilities that Qinling and Zaqar will bring to our customers. Our research and development team is working hard to bring these advancements to our public cloud, ready to be consumed by enterprise customers.

The OpenStack Summit presentation has been recorded and can be seen below. Scott Lowe also published a summary of the presentation on his tech blog.

22 November 2017

Catalyst Blog


Blockchain 101 - part 2

by Russell Michell

Picture of chain

Introduction

In my previous post I covered key concepts fundamental to understanding blockchain technology, and detailed a small handful of real world use-cases. In this post I'd like to go a little further and cover some fundamentals crucial to understanding exactly how blockchain participants (human or software) can have trust in a system that is inherently trustless.

A recap

Generally speaking, blockchain is a technology comprising a distributed network of computers called nodes, each running software designed to perform several functions. The key functions are transaction validation, chain security and chain building using consensus algorithms.
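As a rough illustration of the data structure underneath all this, here is a toy sketch in Python of hash-linked blocks. Real chains add timestamps, Merkle trees, signatures and more, but the tamper-evident linkage is the same idea:

import hashlib
import json

def block_hash(block):
    # Hash the block's contents. Because each block embeds the previous
    # block's hash, changing any old block changes every later hash.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

genesis = {"prev": None, "transactions": ["alice pays bob 5"]}
second = {"prev": block_hash(genesis), "transactions": ["bob pays carol 2"]}
print(block_hash(second))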

Consensus

But what exactly do we mean by consensus? How does it factor into the near-instant transfer of value without double spending? How is it leveraged to achieve systems of decentralised storage and Distributed Autonomous Organisations, particularly between participants who may not formally trust each other outside the system?

Wikipedia describes consensus as “...a decision in the best interest of the whole...”. Wikipedia itself utilises consensus when the community modifies controversial articles, or articles requiring several areas of expertise. It is fundamentally about group decision-making in order to arrive at a single source of truth. But to understand consensus in a blockchain, we first need to understand what such a mechanism is trying to achieve. Securing against malicious or faulty behaviour from nodes, securing against bad incoming transaction data (problems that come down to the Byzantine Generals Problem), and synchronising each node’s knowledge of the network’s current state are but three examples.

In terms of human organisations, a national army comprises a military hierarchy where decisions concerning actions affecting many infantry units are traditionally made by a single member – the General. In a co-operative organisation such as New Zealand’s Fonterra, proposed fiscal or organisational legislative action is always performed by means of a decision made by more than one member. But in order for members of either of the above examples to enact or approve decisions, the rest of the organisation requires a level of trust in those who make or propose them.

Organisationally speaking, trust comes from varied sources: industry or academic qualifications, or perhaps appointment by someone already in possession of trust. In the absence of these, and sometimes in addition to them, prior friendships, mood, whim and other biases also affect outcomes. When implemented in a blockchain, however, consensus is a system of decision-making to reach agreement among the network members – its peers. Human biases are eliminated in favour of systems of mathematics, logic and cryptography, packaged up in a transparent and open manner using open source technology and open data. However, such systems are not without issues of their own, as we shall see when we discuss permissionless and permissioned blockchain implementations shortly.

Consensus Implementations

Depending on your definition, there are around 20 different blockchain and blockchain-like implementations, each designed to do a specific job. Bitcoin for the transmission and store of value in Bitcoins, Ethereum for long-running programs of almost any kind, IOTA for micro-transactions between devices in an IoT (Internet of Things) network, HyperLedger Fabric, Sawtooth and Iroha for enterprise, and many others. It is the sheer variation of distinct use-cases that requires different ways of validating transactions, securing networks and appending blocks.

Proof of Work

The Proof of Work (PoW) algorithm built into Bitcoin, known as Hashcash, was originally created to reduce email spam. It was thought that if a charge could be applied to the sending of bulk email, the cost to spammers would become prohibitive. When used for blockchain, immutability is one of Hashcash’s prime features. Transaction accuracy can be (almost) guaranteed by requiring nodes to expend energy in the form of electricity to perform a computationally expensive puzzle, resulting in new blocks being added to the chain. This process is known as mining. Successful nodes are rewarded with newly minted bitcoin currency each time the puzzle is completed. If any attempt is made to change a past transaction, the attacker would need to expend the electricity to re-calculate that block and every block after it. Way too much, it turns out, to be an effective attack.
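As an illustration only (Bitcoin’s real difficulty target is more involved than counting zeros), a toy Hashcash-style puzzle in Python looks something like this: search for a nonce whose hash, together with the block data, starts with a required number of zero digits:

import hashlib

def mine(block_data, difficulty=5):
    # Try nonces until the SHA-256 digest of block_data + nonce starts
    # with `difficulty` zero hex digits -- the computationally expensive part.
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(("%s%d" % (block_data, nonce)).encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

# Each extra zero of difficulty makes the search ~16x more expensive,
# which is what makes re-mining a whole chain of blocks impractical.
print(mine("block 42: alice pays bob 5"))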

Proof of Stake

In Proof of Stake, nodes aren’t mining anything. Instead, they act as transaction validators. Each node puts down a deposit – a stake – in the cryptocurrency of the network. For each round of participation, the greater the amount staked, the greater the chances of a node being selected by the algorithm to participate in block validation. The reward for then successfully validating a block is proportional to the amount staked.
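A minimal sketch of stake-weighted validator selection, with made-up validators and stake amounts (real protocols add secure randomness, slashing of misbehaving validators, and more):

import random

# Hypothetical validators and the amounts they have staked.
stakes = {"alice": 500, "bob": 300, "carol": 200}

def pick_validator(stakes):
    # Select a validator with probability proportional to its stake,
    # so alice here wins roughly half the rounds.
    names = list(stakes)
    return random.choices(names, weights=[stakes[n] for n in names])[0]

print(pick_validator(stakes))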

Proof of X

There are many alternative consensus algorithms already in use or proposed, whose end result is agreement among participants, or a subset of them, as to the state of the network. Proof of Storage uses a node’s ability to verify that it has sufficient hard-drive storage capacity to participate; systems such as Filecoin and Storj use this. Proof of Bandwidth is a similar idea, and there are also PBFT, Proof of Burn, Proof of Capacity, Proof of Elapsed Time and many others, each designed for a specific use-case.

Trustless Trust

We’ve seen that a system built on open source software, providing open data via principles of open access, can be trusted by users to secure itself for the benefit of applications running on top of it. But so far we have only discussed ‘classic’ blockchain implementations that are open and permissionless. What about applications that require a blockchain backend that is closed by design and unavailable to the public? Who on earth would want such a thing, and what would it look like?

The Enterprise In The Room

Classic enterprise organisations, from SMEs to government, have seen the benefits that decentralised consensus can bring to their business, and they want in. But an enterprise can’t let just anyone into its networks, or view its data; a great number have extremely onerous compliance and regulation to contend with. So the sum total of all participants’ ‘off-chain’ compliance needs to be fulfilled by the system itself. In such systems participants are usually parties already known to one another outside the network; companies already involved in a service or supply chain, for example. Because participants are already known, a large chunk of the unknowns are mitigated in one fell swoop.

A permissioned blockchain therefore restricts participants’ access to, and participation in, the network for the purposes of legislative compliance and traditional ideas around network and data security.

Coming at these blockchain implementations armed only with a knowledge of open, permissionless implementations, confusion and a little suspicion are understandable. If a chain is hidden away behind a firewall, surely it loses all vestiges of blockchain’s (supposedly) distributed nature, and the data centre in which the network is located, however secure, becomes the honeypot to which attackers, savvy to the network’s physical location, are attracted? And if the number of validating nodes within the network is only in the order of hundreds (as it usually is in HyperLedger Fabric, for instance), how good can the quality of consensus actually be?

It turns out that, with careful study of projects suited to enterprise – such as multi-party identity and supply-chain management between organisations already known to one another – the type of DLT implementation, and the consensus mechanism at its heart, may need to differ. There are now enough permissioned, permissionless and hybrid implementations available that for most common use-cases there is already a suitable solution.

This then leaves only cloud engineers to install and maintain them and developers to build applications on top of them. This is the way things should be, and is the way things are already for traditional web-based applications in most development shops.

HyperLedger

The HyperLedger Project is an open source collaboration between a group of global companies (IBM, Intel, Huawei and others) who set out to build a set of common tools, and a common language for blockchain and DLT technology. The project is maintained by the Linux Foundation and comprises eight different projects at present. Five of these are blockchains, some with pluggable consensus mechanisms, and three are modules used to build or integrate Hyperledger-based DLT technology with other systems.

From an open perspective, given the downsides of permissioned chains – such as a perceived increased attack surface from non-geographically distributed nodes, the possibility that ‘open’ APIs into transactions might be filtered at source, and a comparatively small total number of nodes (with even fewer actually participating in validation, under PBFT for example) – it can seem that such systems are entirely missing the point of distributed consensus.

Let’s think about the problem from an Enterprise Architect’s perspective for a moment. The kinds of problems these organisations believe can be solved with DLT will likely only ever concern a few organisations, all of whom are already known to one another. This means that Smart Contracts written in well-known languages like Go and Python, and the ability for consensus engines to be pluggable, are just two bonuses on top of the sub-second transaction times now achievable given the very low network latencies of your average datacentre.

Summary

The HyperLedger projects are open source and have open governance at their core. Once they have greater momentum, especially from enterprise users and those initially only familiar with permissionless systems such as Ethereum, the perceived problems will be addressed as they are encountered. Should developers still not feel happy about the state of a project, it can be forked, just like any other open source project.

Questions remain for the truly ‘open’ among us as to how effective permissioned systems can really be, given even just the small number of issues touched upon here. As always, time will tell.

A very healthy question to ask when considering DLT as part of any solution is “Can’t we just host a database and a web-based app?!”. Nine times out of ten, the answer will be “probably”. But for the remainder – or if, after asking, the qualities of data immutability, multi-party interaction and strong assurances of data integrity are deemed critical – then we’d better talk.

About Russell

Russell is a Senior Developer on our SilverStripe bespoke development team. He is fascinated by the changes blockchain and its applications can bring about in the disruption and transformation of the world’s sociological and technological landscape.

Work In Progress

by Jess Freeman

On November 8, some Catalystas attended the Work in Progress Conference in Wellington. 

Photo of Jess and Graham

We had the privilege of listening to a whole day of incredible speakers who opened our minds to what the future of work looks like.

What will our new workforce look like? What portion of it will be human? What technologies will we have available? How is this affecting our economy, and how are we going to respond?

One thing that particularly struck a chord with us was what the future of education and workplace training will look like. There’s an obvious need for workplaces and education providers to shift tactics and be more responsive to the ‘new collar’ worker - the worker who will retrain and change jobs several times in their lifetime. How are our children, teenagers and young adults learning? Are they being adequately prepared for the job market 5, 10 or 20 years from now?

We certainly came away with a lot of questions and a lot of food for thought.

With a changing education system and a changing employment structure, we need the right tools to make sure the needs of our employees, employers, students and educators are being met.

Tools like Mahara, our ePortfolio system, which is already used in around 1600 schools in New Zealand alone and many more around the world. Mahara is already used by many professions: nurses, doctors, lawyers, teachers - for managing their CPD. Learning systems like Moodle, which has a huge range of functions to complement learning and professional development programmes. And Totara Learn, to address learning and performance management needs in almost any style that suits your business.

We’ll be using Totara internally for our Catalyst staff so we can continue to adapt and be flexible for our employees’ benefit. It will assist them with their career growth, their learning and their personal development – all in a way that suits them. It must be learner-centric. It also instantly gives us the ability to manage performance appraisals and talent throughout the organisation. We also have an internal Mahara instance already, but we see this playing a much stronger part in our future.

We are so excited for the changes ahead of us!

Huge thanks to the Work In Progress team for organising such a great event. 

06 November 2017

Catalyst News

Catalyst Cloud joins OpenStack Passport Program

Today in Sydney, OpenStack public cloud providers from around the world, in collaboration with the OpenStack Foundation, launched the OpenStack Public Cloud Passport program.

Openstack passport

01 November 2017

Catalyst News

Mahara 17.10: Advancements in reporting

We released the latest version of the ePortfolio platform Mahara on 30 October 2017. Mahara 17.10 comes with a number of new features for learners, educators, assessors, and administrators.

17 September 2017

Andrew Ruthven

Missing opkg status file on LEDE...

I tried to install a package on my home router, which is running LEDE, only to be told that libc wasn't installed. Huh? What's going on?! It looked to all intents and purposes as though libc wasn't installed. And it looked like nothing was installed.

What to do if opkg list-installed is returning nothing?

I finally tracked down the status file it uses as being /usr/lib/opkg/status. And it was empty. Oh dear.

Fortunately the info directory had content. This means we can rebuild the status file. How? This is what I did:

cd /usr/lib/opkg/info
for x in *.list; do
    pkg=$(basename $x .list)
    echo $pkg
    # Pull the package metadata back out of opkg and mark it installed.
    opkg info $pkg | sed 's/Status: .*$/Status: install ok installed/' >> ../status
done

And then for the special or virtual packages (such as libc and the kernel):

for x in *.control; do
    pkg=$(basename $x .control)
    # Only add packages that weren't already captured by the first loop.
    if ! grep -q "Package: $pkg" ../status
    then
        echo $pkg is missing; cat $x >> ../status
    fi
done

I then had to edit the file to tidy up some newlines for the kernel and libc, and to set their status lines correctly. I used "install hold installed".

Now that I've shaved that yak, I can install tcpdump to try and work out why a VoIP phone isn't working. Joy.

02 September 2017

Andrew Ruthven

Network boot a Raspberry Pi 3

I found that to make all this work I had to piece together a bunch of information from different locations. This fills in some of the blanks from the official Raspberry Pi documentation. See here, here, and here.

Image

Download the latest raspbian image from https://www.raspberrypi.org/downloads/raspbian/ and unzip it. I used the lite version as I'll install only what I need later.

To extract the files from the image we need to jump through some hoops. Inside the image are two partitions; we need data from each one.

 # Make it easier to re-use these instructions by using a variable
 IMG=2017-04-10-raspbian-jessie-lite.img
 fdisk -l $IMG

You should see some output like:

 Disk 2017-04-10-raspbian-jessie-lite.img: 1.2 GiB, 1297862656 bytes, 2534888 sectors
 Units: sectors of 1 * 512 = 512 bytes
 Sector size (logical/physical): 512 bytes / 512 bytes
 I/O size (minimum/optimal): 512 bytes / 512 bytes
 Disklabel type: dos
 Disk identifier: 0x84fa8189
 
 Device                               Boot Start     End Sectors  Size Id Type
 2017-04-10-raspbian-jessie-lite.img1       8192   92159   83968   41M  c W95 FAT32 (LBA)
 2017-04-10-raspbian-jessie-lite.img2      92160 2534887 2442728  1.2G 83 Linux

You need to be able to mount both the boot and the root partitions. Do this by taking the offset of each one and multiplying it by the sector size, which is given on the line saying "Sector size" (typically 512 bytes). For example, with the 2017-04-10 image, boot has an offset of 8192, so I mount it like this (it is VFAT):

 mount -v -o offset=$((8192 * 512)) -t vfat $IMG /mnt
 # I then copy the data off:
 mkdir -p /data/diskless/raspbian-lite-base-boot/
 rsync -xa /mnt/ /data/diskless/raspbian-lite-base-boot/
 # unmount the partition now:
 umount /mnt

Then we do the same for the root partition:

 mount -v -o offset=$((92160 * 512)) -t ext4 $IMG /mnt
 # copy the data off:
 mkdir -p /data/diskless/raspbian-lite-base-root/
 rsync -xa /mnt/ /data/diskless/raspbian-lite-base-root/
 # umount the partition now:
 umount /mnt

DHCP

When I first set this up, I used OpenWRT on my router, and I had to patch /etc/init.d/dnsmasq to support setting DHCP option 43. As of the writing of this article, a similar patch has been merged, but isn't in a release yet, and, well, there may never be another release of OpenWRT. I'm now running LEDE, and the good news is it already has the patch merged (hurrah!). If you're still on OpenWRT, then here's the patch you'll need:

https://git.lede-project.org/?p=source.git;a=commit;h=9412fc294995ae2543fabf84d2ce39a80bfb3bd6

This lets you put the following in /etc/config/dnsmasq. It says that any device that uses DHCP and has a MAC issued by the Raspberry Pi Foundation should have option 66 (boot server) and option 43 set as specified. Set the IP address in option 66 to the device that serves TFTP on your network; if it's the same device that provides DHCP, it isn't required. I had to set the boot server, as my other network boot devices are using a different server (with an older tftpd-hpa; I explain the problem further down).

 config mac 'rasperrypi'
         option mac 'b8:27:eb:*:*:*'
         option networkid 'rasperrypi'
         list dhcp_option '66,10.1.0.253'
         list dhcp_option '43,Raspberry Pi Boot'

tftp

Initially I used a version of tftpd that was too old and didn't support the way the RPi tries to discover whether it should use the serial-number-based naming scheme. The version of tftpd-hpa in Debian Jessie works just fine. To find out the serial number you'll probably need to increase the logging of tftpd-hpa; do so by editing /etc/default/tftpd-hpa and adding "-v" to the TFTP_OPTIONS option. It can also be useful to watch tcpdump to see the requests and responses, for example (10.1.0.203 is the IP of the RPi I'm working with):

  tcpdump -n -i eth0 host 10.1.0.203 and dst port 69

This was able to tell me the serial number of my RPi, so I made a directory in my tftpboot directory with the same serial number and copied all the boot files into it. I then found that I had to remove the init= portion from the cmdline.txt file I'm using. To ease debugging I also removed quiet. So, my current cmdline.txt contains (newlines entered for clarity, but the file has it all on one line):

dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=/dev/nfs
nfsroot=10.1.0.253:/data/diskless/raspbian-lite-base-root,vers=3,rsize=1462,wsize=1462
ip=dhcp elevator=deadline rootwait hostname=rpi.etc.gen.nz

NFS root

You'll need to export the directories you created via NFS. My exports file has these lines:

/data/diskless/raspbian-lite-base-root	10.1.0.0/24(rw,no_root_squash,sync,no_subtree_check)
/data/diskless/raspbian-lite-base-boot	10.1.0.0/24(rw,no_root_squash,sync,no_subtree_check)

And you'll also want to make sure you're mounting those correctly during boot, so I have in /data/diskless/raspbian-lite-base-root/etc/fstab the following lines:

10.1.0.253:/data/diskless/raspbian-lite-base-root   /       nfs   rw,vers=3       0   0
10.1.0.253:/data/diskless/raspbian-lite-base-boot   /boot   nfs   vers=3,nolock   0   2

Network Booting

Now you can hopefully boot. Unless you run into this bug, as I did, where the RPi will sometimes fail to boot. It turns out the fix, which is mentioned on the bug report, is to put bootcode.bin (and only bootcode.bin) onto an SD card. The RPi will then load the fixed bootcode, which will boot reliably.

11 April 2017

Jonathan Harker

Australian Syrah then and now: current line-up

Tonight was the second part of a two-part tasting of Australian Shiraz with Geoff Kelly at Regional Wines & Spirits, the first being the 1996 library tasting (see previous post). This time we blind-tasted eleven new 2013-14 Australian Shiraz wines, including the Penfolds Grange, which is north of $850 per bottle, and with an Elephant Hill Hawke’s Bay 2014 Syrah thrown in to keep us honest.

Each wine was very well-built, young and purple, peppery and bold. Each wine had something to say, but unfortunately this time I exhausted my palate by the ninth, and couldn’t make head or tail of the last three. A shame, because although I liked them, the Lloyd Reserve I admired in the library tasting was hiding among them.

As we poured the blind wines into glasses, the colours of all the wines were good healthy young Syrah deep purple-red, although I could tell there would be something special about No. 6 and No. 9 just from the density of colour; No. 6 looked like you could stand a spoon up in it.

For me the remarkable wines were Nos. 3, 6, and 9.

No. 3 reminded me of a big, older-style blackcurrant jam Australian Shiraz, with lots of berry, ripe toffee and a long oaky finish. The minty, freshly-crushed basil leaf on the nose typical of South Australian Shiraz goes well; Geoff says if he likes it he calls it “mint”, or “eucalypt” otherwise. Someone else remarked this wine might be like Kylie crashing a Holden ute full of Foster’s into a blackberry patch. Enjoyable perhaps, but not especially subtle. No. 6 was the most beautifully dark rich purple-red, with an intoxicating, highly concentrated nose of mostly blackcurrant, but also warm florals and a whiff of rough-sawn timber. The wine itself was complex, initially spicy but with savoury meaty flavours and berries competing for space, with a longer finish. No. 9 for me was also a dense colour, with a peppery lavender on the nose and an interesting hint of baked dates or figs, not over-sweet but nicely integrated into the plum fruit flavours for a lingering complexity.

Once again we gathered some “wisdom of the crowd” data to see if as a group we could pick our wines, and this time we did a bit better; results are below.

Blind rating totals from the new 2013-14 Australian Syrah tasting.

The Penfolds Grange hiding at No. 6 was correctly identified by about half the group. I was overthinking things too much and was trying to re-taste the last three wines at this point, to find the rich, complex wine that would be a likely Grange candidate. Having never tasted it before, I had assumed that something as ludicrously expensive as the Grange might surely be less up in one’s grill with its big bold Aussie blackcurrants, so although No. 6 was beautifully dense and concentrated, I had assumed the Grange was busy being all sophisticated elsewhere. Once everyone’s hands shot up, however, it became clear the cat was out of the bag! The No. 9 I liked was the Elephant Hill 2014 Syrah Reserve, which surprised me, and the Lloyd Reserve from Coriole in McLaren Vale was hiding at No. 10, which was interesting to re-taste after the Grange. It has that torn basil leaf mint and lavender on the nose, with savoury and plum, liquorice and a good long finish.

Of further note was No. 11, the Cape Mentelle 2013 Shiraz from Margaret River in Western Australia. This was a more delicate wine than the others, with an interesting and complex bouquet of jasmine, perhaps roses, a good plum fruit body and a nice mild spiciness like a hint of Christmas cake, with a good longish finish. It was certainly different enough from the others that three of us thought it was the Hawke’s Bay Syrah.

Herewith the full list of wines:

1. 2015 Wirra Wirra Shiraz Catapult, McLaren Vale, South Australia
2. 2013 Domaine Chandon Shiraz, Yarra Valley, Victoria
3. 2014 Burge Shiraz Filsell, Barossa Valley, SA
4. 2014 Two Hands Shiraz Gnarly Dudes, Barossa Valley, SA
5. 2014 John Duval Shiraz Entity, Barossa & Eden Valley, SA
6. 2012 Penfolds Shiraz Grange, Barossa Valley, SA
7. 2012 Wirra Wirra Shiraz RSW, McLaren Vale, SA
8. 2012 Elderton Shiraz Command, Barossa Valley, SA
9. 2014 Elephant Hill Syrah Reserve, Hawke’s Bay, New Zealand
10. 2013 Coriole Shiraz Lloyd Reserve, McLaren Vale, SA
11. 2013 Cape Mentelle Shiraz, Margaret River, West Australia
12. 2013 Seppelt Shiraz St Peters, Grampians, Victoria

30 March 2017

Jonathan Harker

Australian Syrah then and now: 1996 library tasting

Tonight we went to one of Geoff Kelly’s illuminating wine tastings, held as ever at Regional Wines & Spirits next to the Basin Reserve in Wellington. This was part one of a two-part tasting – a library tasting of 20 year-old Australian Shiraz wines, with a 1996 Hermitage thrown in as a yardstick; next month part two will be a tasting of eleven new vintage Australian Shiraz with a good Hawke’s Bay Syrah to compare. Tonight was a blind tasting, in order to gather some interesting data from participants before revealing which wines were which.

It really is quite intimidating to try twelve magnificent 20 year-old red wines, and try to remain objective about comparing their colour and weight, nose (aroma), taste, complexity, and so on. As humans we’re notoriously bad at taste and smell compared to our other senses, so even just trying to identify the different flavours is a constant challenge. They are sometimes elusive or fleeting; there at the start, but then gone with the vapours a few minutes later. Sometimes they are maddeningly familiar, but the right word, recollection or label for them is just out of reach. Geoff, a true national treasure, runs a good show, reminding us not to speak too much aloud and cloud each other’s judgements, but dropping a few helpful hints and starting points to look for in aged reds, and Australian Syrah in particular, drawing on his 40 years of wine cellaring, judging, and writing.

Most of them were just as you’d imagine beautiful aged 20 year-old Syrah to be: plum or berry dominant, interesting florals, smooth, and tannins tamed by oak and time. That is, apart from No. 5, which to my nose was of fresh cowpat and sweaty horse. No. 7 to me had an unpleasant butyric bile odour, but also a weird, almost salty savoury taste, like Parmigiano. My favourites were No. 3, for its sheer number and complexity of different and intriguing flavours and its beautiful long velvety finish, and No. 8, which was a standout for me. It was the most purple-red of the set, as if it were only three years old, while all the others had aged to a fairly uniform red-ruby, near garnet colour. It had a bold nose of cognac, almond and cherry, with a slight floral element of jasmine and violets. Strong dark plum fruit but with a savoury hint of truffle, and its long-lingering tannins, whilst softened with the oak, were still unwinding even after all this time, and could probably go for another ten years.

Before revealing the wines, Geoff asked us to rate a first and second favourite, a least favourite, and which we thought was the French wine hiding in the glasses. This data set is tabulated below.

No. 5 was the 1996 Cape Mentelle from Margaret River, Western Australia, which might have had a dose of brett, or it may have been corked. No. 3 was the 1996 d’Arenberg Dead Arm from McLaren Vale, South Australia, and No. 8, my favourite, was the 1995 Coriole Lloyd Reserve, also from McLaren Vale. No. 7 was the ludicrously expensive Hermitage (AOC Syrah from the Rhône, France), the Jaboulet Hermitage La Chapelle; Jancis Robinson writes about this wine here. Luckily for me, Regional Wines had a couple of the 2011 Lloyd Reserves in stock!

The full list of wines is detailed on Geoff’s library tasting page, and reproduced here:

1. 1996 Seppelt Shiraz Mount Ida, Heathcote, Victoria
2. 1996 Barossa Valley Estates E&E Shiraz Black Pepper, Barossa Valley
3. 1996 d’Arenberg Shiraz Dead-Arm, McLaren Vale, South Australia
4. 1996 Jim Barry Shiraz McRae Wood, Clare Valley, SA
5. 1996 Cape Mentelle Shiraz, Margaret River, West Australia
6. 1996 Burge Shiraz Meshach, Barossa Valley, SA
7. 1996 Jaboulet Hermitage La Chapelle, Northern Rhone Valley, France
8. 1995 Coriole Shiraz Lloyd’s Reserve, McLaren Vale, SA
9. 1996 Bannockburn Shiraz, Geelong, Victoria
10. 1997 Mount Langi Ghiran Shiraz Langi, Grampians, Victoria
11. 1996 Henschke Shiraz Mount Edelstone, Eden Valley, SA
12. 1996 McWilliams Shiraz Maurice O’Shea, Hunter Valley, NSW

21 October 2016

Kristina Hoeppner


Getting the hang of hanging out (part 2)

A couple of days ago I experienced some difficulties using YouTube Live Events. So today, I was all prepared:

  • Had my phone with me for 2-factor auth so I could log into my account on a second computer in order to paste links into the chat;
  • Prepared a document with all the links I wanted to paste;
  • Had the Hangout on my presenter computer running well ahead of time.

Indeed, I was done with my prep so far in advance that I had heaps of time. It looked like the event was not actually broadcasting, since I couldn’t see anything on the screen, so I wanted to pause the broadcast and thought I needed to adjust the broadcast’s start time.

So I stopped the broadcast, and as soon as I hit the button I knew I shouldn’t have. Stopping the broadcast doesn’t pause it; it ends the broadcast and kicks off the publishing process.

Yep, I panicked. I had about 10 minutes until my session and nobody could actually join it. Scrambling for a solution, I quickly set up another live event, tweeted the link and also sent it out to the Google+ group.

Then I changed the title of the just ended broadcast to something along the lines of “Go to description for new link”, put the link to the new stream into the description field and also in the chat as I had no other way of letting people know where I had gone and how they could join me.

I was so relieved when people showed up in the new event. That’s when the panic subsided, and I still had about 3 minutes to spare before the start of the session.

The good news? We released Mahara 16.10 and Mahara Mobile today (though actually, we soft-launched the app on the Google Play store already yesterday to ensure that it was live for today).

19 October 2016

Kristina Hoeppner


Getting the hang of hanging out (part 1)

Living in New Zealand, far, far away from the rest of the world (except maybe Australia), means that I’m doing a lot of online conference presentations, demonstrations, and meetings. I’ve become well-versed in a multitude of online meeting and conferencing software and know what works on Linux and what doesn’t.

The ones that don’t run on Linux always give me a fright, as I have to start up my VM and hope for the best that it will not die on me unexpectedly. Usually, closing Thunderbird and any browsers helps free some resources in order to let Windows start up. I can only dream of a world in which every conferencing software also runs on Linux.

Lately, some providers have gotten better and make use of WebRTC technology, which only requires a browser and no fancy additional software or Flash. Only when I want to do screensharing do I need to install a plugin, which is done quickly.

So for meetings of fewer than 10 people, I’m usually set and can propose a nice solution like Jitsi, which works well. In the past, my go-to option was Firefox Hello for simple meetings, but that was taken off the market.

But what to do when there may be more than 10 people wanting to attend a session? Then it gets tough very quickly. So I have been trialling Google Hangouts on Air recently, after seeing David Bell use them successfully. It looked easy enough, but boy, was I in for a surprise.

Finding the dashboard

At some point, my YouTube account was switched to a “Creator Studio” one and so I can do live events. Google Hangouts on Air are now YouTube Live Events and need to be scheduled in YouTube.

There is no link from the YouTube homepage to the dashboard for uploading or managing content. I’d have thought that by clicking on “My channel” I’d get somewhere, but far from it. There is nothing in the navigation.

The best choice is to click the “Video Manager” to get to a subpage of the creator area. Or, as I just found out, click your profile icon and then click the “Creator Studio” button.

Finding the creator dashboard

Getting to the creator dashboard either via the “Video Manager” on your channel or via the button under your profile picture.

Scheduling an event

Setting up an event is pretty straightforward, as it’s like filling in the information for a video upload, just with added fields for the event times.

Unfortunately, I haven’t found yet where I can change the placeholder for the video that is shown in the preview of the event on social media. It seems to set it to my channel’s banner image rather than allowing me to upload an event-specific image.

So once you have your event, you are good to go and can send people the link to it. The links that you get are only for the stream. They do not allow your viewers to actually join your hangout and communicate with you in there. That’s where it gets a bit bizarre, and what prompted me to write this blog post so I can refer back to it in the future.

Different links for different hangouts

There is the hangout link and the YouTube event link

Streaming vs. Hangout

There are actually two components to the YouTube Live event (formerly known as Google Hangout on Air):

  1. The Hangout from which the presenter streams;
  2. The YouTube video stream that people watch.

In order to get into the Hangout, you click the “Start Hangout on Air” button on your YouTube events page. That takes you into a Google Hangout with the added buttons for the live event. You are supposed to see how many people joined in, but the count may be a bit off at times.

In that Google Hangout, you have all the usual functionality available of chats, screensharing, effects etc. You can also invite other people to join you in there. That will allow them to use the microphone. The interesting thing is that you can simply invite them via the regular Hangout invite. You can’t give them the link to the stream as they would not find the actual hangout. And if you only give people the link to the Hangout but not the stream, nobody will be in the stream.

Finding the relevant links in the hangout

You can also get the two different links from the hangout. Just make sure you get the correct one.

The YouTube video stream page only shows the content of the Hangout that is displayed in the video area, but not the chat. The live event has its own separate chat, which you can’t see in the Hangout! In order to see any comments your viewers make, you need to have the streaming page open and read the comments there.

In a way, it’s nice to keep the Hangout chat private because if you have other people join you in there as co-presenters, you can use that space to chat to each other without other viewers seeing what you type. However, it’s pretty inconvenient as you have to remember to check the other chat. Dealing with separate windows during a presentation can be daunting. It would be nicer to see the online chat also in the hangout window.

Today I even fired up another computer and had the stream showing there, which taught me another thing.

Having the stream on another computer also showed me how slow the connection was. The live event was at least 5 seconds behind if not more. That is something to consider when taking questions.

The stream was also very grainy. I was on a fast connection, but the default speed was on the lowest setting nevertheless. Fortunately, once I increased the resolution on the finished video, the video did get better. I don’t know if you could increase the setting during the stream.

Last but not least, I couldn’t present in full-screen mode as the window wouldn’t be recognized. I’ll have to try again and see if it works if I screenshare my entire desktop as it would be nicer not to show the browser toolbars.

No sharing of links

When you are not the owner of the stream, you cannot post URLs. I’m pretty sure that is to prevent trolls misusing public YouTube events to post links. However, it’s pretty inconvenient for the rest who want to hold meetings and webinars and share content. You can’t post a single link. Only I as organizer could post links. Unfortunately, I found that out only after the event as I was logged in under a different account.

Being used to many other web conferencing software, I’ve come to like the backchannel and the possibility to post additional material, which are in many cases links, so people can simply click on them. This was impossible in the YouTube live event as I was only a regular user. And even had I logged in with my creator account, which I’ll certainly do during the next session on Friday, nobody else would have been able to post a link. That is very limiting. I wish it were possible to determine whether links were allowed or not.

Editing the stream

Once the event was over today, I went back to the video, but couldn’t find any editing tools. I started to get discouraged, as I had hoped to simply trim a bit of non-essential chatter from the front and the back and then keep the rest of the video online, rather than trimming my local recording that I had made on top of the online recording, encoding it and uploading it. Before I could get sadder, I had to do some other work, and once I came back to the recording, I suddenly had all my regular editing tools available and rejoiced. Apparently, it takes a while until all functionality is at your disposal.

So I trimmed the video, which was not easy, but I managed. And then it did its encoding online. After some time, the shortened recording was available and I didn’t have to send out a new link to the video. 🙂

Summing up

What does that mean for the next live event with YouTube events?

  1. Click the “Creator Studio” button under my Google / YouTube profile to get to the editor dashboard easily.
  2. Invite people who should have audio privileges through the Hangout rather than giving them the YouTube Live link, which is displayed more prominently.
    • Co-presenters are invited via Hangout.
    • Viewers get the YouTube live link.
  3. Open the YouTube Live event with the event creator account in order to be able to post links in the chat on YouTube. Have both the Hangout and the YouTube Live event open so you can see the online chat of those who aren’t in the Hangout.
  4. Take into account that there is a delay until the content is shown on YouTube.
  5. Once finished, wait a bit until all editing features are available and then go into post-production.

Remembering all these things will put me into a better position for the next webinar, which is a repeat session of today’s and showcases the new features of Mahara 16.10.

Update: Learn some more about YouTube Live events from my second webinar.

14 October 2016

Jonathan Harker

Learning the contrabass trombone

Wessex Contrabass in F and Shires bass trombone, side by side.

I’ve recently acquired a Wessex contrabass trombone in F. It is pretty much a knock-off of the Thein Ben van Dijk model, and compared to this gold standard of contrabass trombone, this instrument is about an eighth of the price and perfectly decent. It plays really well throughout the range, and the slide, valves and bell are all of high build quality, unlike the notorious Chinese-made instruments of the past.

But really, this post is just an excuse to test out a nifty music notation WordPress plugin. The shorthand it uses is ABC, which is a bit quaint compared to Lilypond, but it seems to work well enough. For instance, take the first scale we might learn on a contrabass trombone:

The contrabass trombone in F only has six positions on the open slide instead of seven. Furthermore, only the first five are actually practical, unless you are Tarzan, so we can play the G on the first (D) valve in third position. While the A is also theoretically available in first position on the D valve, it is indistinct and slightly flat. Play it on the open slide in fourth. Good. Now, how about an excerpt from Ein Alpensinfonie by Richard Strauss:

Sounds good! Now, pop along to the NZSO performance in March 2017 to hear Shannon playing it, live in concert! In the meantime, here’s the excerpt performed by the Berlin Philharmoniker:

11 October 2016

Kristina Hoeppner


Mahara Hui @ AUT recap

I’m playing catch-up and working my way backwards through my events. Yesterday, I wrote a bit about the NZ MoodleMoot on 5 October 2016. Just a day before that, AUT organized a local half-day Mahara Hui, Mahara Hui @ AUT 2016. Lisa Ransom and Shen Zhang from CfLAT (Centre for Learning and Teaching) were responsible for the event, wrangled everything well and made all attendees feel welcome.

It was great to catch up with lecturers and learning technology support staff from AUT, Unitec and University of Waikato, and with a user from Nurseportfolio. We started the day out with introductions and examples of how people use Mahara.

Mahara in New Zealand tertiaries

At AUT, the CfLAT team trained about 630 students this academic year, in particular in Public Policy, Tourism and Midwifery. Paramedics are also starting to use ePortfolios and can benefit from the long experience that Lisa and Shen have supporting other departments at AUT.

Linda reported that Mahara is now also being used in culinary studies, in elective courses as well as degree papers. They use templates to help students get started, but then let them run with it. Portfolios are well suited to culinary students, as they can showcase their work as well as document their creation progress and improve their work.

She also showcased a portfolio from a new lecturer who became a student in her area of expertise, going through a portfolio assignment with her students to see for herself how the portfolios worked and what she could and wanted to expect from her students. By going through the activity herself, she became an expert and now has a better understanding of the portfolio work.

John, a practicum leader who is new to AUT, came along to the hui and said that they are starting to use portfolios for their lesson plans and goals. Reflections are expected from the future teachers and form an important aspect. I’m sure we’ll hear more from him.

Sally from Nursing at AUT is looking at Mahara again, and could form connections directly with Unitec and Nurseportfolio, which is fantastic, because that’s what these hui are about: connecting people.

JJ updated the group on the activities at Unitec. Medical imaging is going digital and looking into portfolios, and they also created a self-paced Moodle course on how to teach with Mahara effectively so that lecturers at Unitec can get a good overview.

Stephen from the University of Waikato gave an overview of the portfolio activities  at his university. Waikato still works with two systems, MyPortfolio.school.nz for education students becoming teachers, and the new Waikato-hosted Mahara site. Numerous faculties at Waikato now work with portfolios. If you’d like to find out more directly, you can watch recordings from the last WCELfest, in particular the presentations by Richard Edwards, Sue McCurdy and Stephen Bright. Portfolios will be used even more in the future as evidence from general papers will need to be collected in them by every student.

We also discussed a couple of ideas from a lecturer and are interested in other people’s opinions on them. One idea was to be able to share portfolios more easily on social networks and then see directly when the portfolio was updated and share that news again. The other idea was to show people who are interested in the portfolios when new content has been added. The latter is already possible to a degree with the watchlist. However, students or lecturers still need to put specific pages on the watchlist first, rather than the changes coming to them. The enhancements that Gregor is planning for the watchlist go more in that direction.

Mahara 16.10

In a second part of the hui, I presented the new features of Mahara 16.10, and we spent a bit of time on taking a closer look at SmartEvidence.

I’m very excited that this new version will be live very soon and look forward to the feedback by users on how SmartEvidence works out for them. It’s the initial implementation. While it doesn’t contain all the bells and whistles, I think it is a great beginning to get the conversations started around use cases besides the ones we had and see how flexible it is.

Next hui and online meetings

If you want to share how you are using Mahara, you’ll have the opportunity to do so in Wellington on 27 October 2016 when we’ll have another local Mahara Hui, Mahara Hui @ Catalyst. From 5 to 7 April 2017, we are planning a bigger Mahara Hui again in Auckland. More information will be shared soon on the Mahara Hui website.

There will also be two MUGOZ online meetings on 19 and 21 October 2016 in which I’ll be presenting the new Mahara 16.10 features. You are welcome to attend either of these 1-hour sessions organized by the Australian Mahara User Group. Since the sessions are online, anybody can tune in.

24 July 2016

Andrew Ruthven

Allow forwarding from VoiceMail to cellphones

Something I've been wanting to do with our Asterisk PBX at Catalyst for a while is to allow callers who reach VoiceMail to be forwarded to the callee's cellphone, if allowed. As part of an Asterisk migration we're currently carrying out, I finally decided to investigate what is involved. One of the nice things about the VoiceMail application in Asterisk is that callers can hit 0 for the operator, or * for some other purpose. I decided to use * for this purpose.

I'm going to assume a working knowledge of Asterisk dial plans, and I'm not going to try and explain how it works. Sorry.

When a caller hits * the VoiceMail application exits and looks for a rule that matches a. Now, the simple approach looks like this within our macro for handling standard extensions:

[macro-stdexten]
...
exten => a,1,Goto(pstn,027xxx,1)
...

(Where I have a context called pstn for placing calls out to the PSTN).

This'll work, but anyone who hits * will be forwarded to my cellphone. Not what I want. Instead we need to get the dialled extension into a place where we can perform extension matching on it. So instead we'll have this (the extension is passed into macro-stdexten as the first variable - ARG1):

[macro-stdexten]
...
exten => a,1,Goto(vmfwd,${ARG1},1)
...

Then we can create a new context called vmfwd with extension matching (my extension is 7231):

[vmfwd]
exten => 7231,1,Goto(pstn,027xxx,1)

I actually have a bit more in there to do some logging and set the caller ID to something our SIP provider will accept, but you get the gist of it. All I need to do is to arrange for a rule per extension that is allowed to have its VoiceMail callers forwarded to a cellphone. Fortunately I have that part automated.

The only catch is for extensions that aren't allowed to be forwarded to a cellphone. If someone calling their VoiceMail hits *, their call will be hung up and I get nasty log messages about there being no rule for them. How do we handle them? Well, we send them back to VoiceMail. In the vmfwd context we add a rule like this:

exten => _XXXX,1,VoiceMail(${EXTEN}@sip,${voicemail_option})
  same => n,Hangup

So any extension that isn't otherwise matched hits this rule. We use ${voicemail_option} so that we can use the same mode as was used previously.

Easy! Naturally this approach won't work as-is for other people trying to do this, but given I couldn't find write-ups on how to do it, I thought it might be useful to others.

Here's my macro-stdexten and vmfwd in full:

[macro-stdexten]
exten => s,1,Progress()
exten => s,n,Dial(${ARG2},20)
exten => s,n,Goto(s-${DIALSTATUS},1)
exten => s-NOANSWER,1,Answer
exten => s-NOANSWER,n,Wait(1)
exten => s-NOANSWER,n,Set(voicemail_option=u)
exten => s-NOANSWER,n,Voicemail(${ARG1}@sip,u)
exten => s-NOANSWER,n,Hangup
exten => s-BUSY,1,Answer
exten => s-BUSY,n,Wait(1)
exten => s-BUSY,n,Set(voicemail_option=b)
exten => s-BUSY,n,Voicemail(${ARG1}@sip,b)
exten => s-BUSY,n,Hangup
exten => _s-.,1,Goto(s-NOANSWER,1)
exten => a,1,Goto(vmfwd,${ARG1},1)
exten => o,1,Macro(operator)

[vmfwd]

exten => _XXXX,1,VoiceMail(${EXTEN}@sip,${voicemail_option})
  same => n,Hangup

#include extensions-vmfwd-auto.conf

And I then build extensions-vmfwd-auto.conf from a script that is used to generate configuration files for defining accounts, other dial plan rule entries and phone provisioning files.
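For illustration, here's a minimal sketch of what that generation step could look like, using a hypothetical mapping of extensions to cellphone numbers (the real script also produces account definitions and provisioning files):

# Hypothetical mapping of extensions allowed to forward to a cellphone.
FORWARDS = {
    "7231": "027xxx",
}

with open("extensions-vmfwd-auto.conf", "w") as conf:
    for extension, cellphone in sorted(FORWARDS.items()):
        # One vmfwd dial plan rule per allowed extension.
        conf.write("exten => %s,1,Goto(pstn,%s,1)\n" % (extension, cellphone))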

With thanks to John Kiniston for the suggestion about the wildcard entry in vmfwd.

25 August 2014

Dan Marsden

SCORM hot topics.

As a follow up from the GSOC post I thought it might be useful to mention a few things happening with SCORM at the moment.

There are currently approx. 71 open issues related to SCORM in the Moodle tracker; of those, 38 are classed as bugs/issues I should fix in stable branches at some point, and 33 are really feature/improvement requests.

Issues about to be fixed and under development
MDL-46639 – External AICC packages not working correctly.
MDL-44548 – SCORM Repository auto-update not working.

Issues that are high on my list of things to look at, and that I hope to get to soon.
MDL-46961 – SCORM player not launching in Firefox when new window being used.
MDL-46782 – Re-entry of a scorm not using suspend_data or resuming itself should allow returning to the first sco that is not complete.
MDL-45949 – The TOC Tree isn’t quite working as it should after our conversion to YUI3 – it isn’t expanding/collapsing in a logical manner – could be a bit of work here to make this work in the right way.

Issues recently fixed in stable releases.
MDL-46940 – new window option not working when preview mode disabled.
MDL-46236 – Start new attempt option ignored if new window used.
MDL-45726 – incorrect handling of review mode.

New improvements you might not have noticed in 2.8 (not released yet)
MDL-35870 – Performance improvements to SCORM
MDL-37401 – SCORM auto-commit – allows Moodle to save data periodically even if the SCORM doesn’t call “commit”

New improvements you might not have noticed in 2.7:
MDL-28261 – Check for live internet connectivity while using SCORM – warns the user if SCORM is unable to communicate with the LMS.
MDL-41476 – The SCORM spec defines a small amount of data that can be stored when using SCORM 1.2 packages, we have added a setting that allows you to disable this restriction within Moodle to allow larger amounts of data to be stored (you may need to modify your SCORM package to send more data to make this work.)

Thanks to Ian Wild, Martin Holden, Tony O’Neill, Peter Bowen, André Mendes, Matteo Scaramuccia, Ray Morris, Vignesh, Hansen Ler, Faisal Kaleem and many other people who have helped report/test and suggest fixes related to SCORM recently including the Moodle HQ Integration team (Eloy, Sam, Marina, Dan, Damyon, Rajesh) who have all been on the receiving end of reviewing some SCORM patches recently!

GSOC 2014 update

Another year of GSOC has just finished and Vignesh has done a great job helping us to improve a number of areas of SCORM!
I’m really glad to finally have some changes made to the JavaScript datamodel files as part of MDL-35870 – I’m hoping this will improve the performance of the SCORM player, as the JavaScript can now be cached properly by the user’s browser rather than dynamically generated using PHP.

Vignesh has made a number of general bug fixes to the SCORM code and has also tidied up the code in the 2.8 branch so that it now complies with Moodle’s coding guidelines.

These changes have involved almost every single file in the SCORM module, and significant architectural changes have been made. We’ve done our best to avoid regressions (thanks Ray for testing SCORM 2004), but due to the large number of changes (and the fact that we only have 1 Behat test for SCORM) it would be really great if people could test the 2.8 branch with their SCORM content before release, so we can pick up any other regressions that may have occurred.

Thanks heaps to Vignesh for his hard work on SCORM during GSOC – and kudos to Google for running a great program and providing the funding to help it happen!

10 July 2014

Dan Marsden

Goodbye Turnitin…

Time to say goodbye to the “Dan Marsden Turnitin plugin”… well almost!

Turnitin have done a pretty good job of developing a new plugin to replace the code that I have been working on since Moodle 1.5!

The new version of their plugin contains 3 components:

  1. A module (called turnitintool2) which contains the majority of the code for connecting to their new API and is a self-contained activity like their old “turnitintool” plugin
  2. A replacement plugin for mine (plagiarism_turnitin) which allows you to use plagiarism features within the existing Moodle Assignment, Workshop and forum modules.
  3. A new Moodle block that works with both the above plugins.

The Moodle.org Plugins database entry has been updated to replace my old code with the latest version from Turnitin. We have a number of clients at Catalyst using the new plugin, and the migration has mostly gone ok so far – there are a few minor differences between my plugin and the new version from Turnitin, so I encourage everyone to test the upgrade to the new version before running it on their production sites.

I’m encouraging most of our clients to update to the new plugin at the end of this year but I will continue to provide basic support for my version running on all Moodle versions up to Moodle 2.7 and my code continues to be available from my github repository here:
https://github.com/danmarsden/moodle-plagiarism_turnitin

Thanks to everyone who has helped in the past with the plugin I wrote – hopefully this new version from Turnitin will meet everyone’s needs!

31 October 2012

Chris Cormack

Signoff statistics for October 2012

Here are the signoff statistics for bugs in October 2012:
  • Kyle M Hall – 24
  • Owen Leonard – 18
  • Chris Cormack – 15
  • Nicole C. Engard – 10
  • Mirko Tietgen – 9
  • Marc Véron – 6
  • Frédéric Demians – 5
  • Jared Camins-Esakov – 5
  • Magnus Enger – 4
  • Jonathan Druart – 4
  • M. de Rooy – 3
  • Melia Meggs – 3
  • wajasu – 2
  • Paul Poulain – 2
  • Fridolyn SOMERS – 2
  • Tomás Cohen Arazi – 2
  • Matthias Meusburger – 1
  • Katrin Fischer – 1
  • Julian Maurice – 1
  • Koha Team Lyon 3 – 1
  • Mason James – 1
  • Elliott Davis – 1
  • mathieu saby – 1
  • Robin Sheat – 1

16 October 2012

Chris Cormack

Unsung heroes of Koha 26 – The Ada Lovelace Day Edition

Darla Grediagin

Darla has been using Koha since 2006, for the Bering Strait School District in Alaska. This is pretty neat in itself; what is cooler is that, as far as I know, they have never had a ‘support contract’, doing things either by themselves or with the help of IT personnel as needed. One of the first of Darla’s blog posts that I read was about her struggles trying to install Debian on an eMac. I totally respect anyone who is trying to reclaim hardware from the dark side 🙂

Darla has presented on Koha at conferences, and maintains a blog with useful information, including sections on what she would do differently, as well as some nice feel-good bits like this, from April 2007:

I know I had an entry titled this before, but I do love OSS programs. Yesterday I mentioned that I would look at Pines because I like the tool it has to merge MARC records. Today a Koha developer emailed me to let me know that he is working on this for Koha and it should be available soon. I can’t imagine getting that kind of service from a vendor.

Hopefully she will be able to make it to Kohacon13 in Reno, NV. It would be great to put a face to the email address 🙂


10 October 2012

Chris Cormack

New Release team for Koha 3.12

Last night on IRC the Koha community elected a new release team for the 3.12 release. Once again it is a nicely mixed team: there are 16 people involved, from 8 different countries (India, New Zealand, USA, Norway, Germany, France, Netherlands, Switzerland), and four of the 12 roles are filled by women.

The release team will be working super hard to bring you the best release of Koha yet, and you can help:

  • Reporting bugs
  • Testing bug fixes
  • Writing up enhancement requests
  • Using Koha
  • Sending cookies
  • Inventing time travel
  • Killing MARC
  • Winning the lottery and donating the proceeds to the trust to use for Koha work.

24 July 2012

Pass the Source

Google Recruiting

So, Google are recruiting again. From the open source community, obviously. It’s where to find all the good developers.

Here’s the suggestion I made on how they can really get in front of FOSS developers:

Hi [name]

Just a quick note to thank you for getting in touch with so many of our
Catalyst IT staff, both here and in Australia, with job offers. It comes
across as a real compliment to our company that the folks that work here
are considered worthy of Google’s attention.

One thing about most of our staff is that they *love* open source. Can I
suggest, therefore, that one of the best ways for Google to demonstrate
its commitment to FOSS and FOSS developers this year would be to be a
sponsor of the NZ Open Source Awards. These have been very successful at
celebrating and recognising the achievements of FOSS developers,
projects and users. This year there is even an “Open Science” category.

Google has been a past sponsor of the event and it would be good to see
you commit to it again.

For more information see:

http://www.nzosa.org.nz/

Many thanks
Don

09 July 2012

Andrew Caudwell

Inventing On Principle Applied to Shader Editing

Recently I have been playing around with GLSL Sandbox (github), a what-you-see-is-what-you-get shader editor that runs in any WebGL-capable browser (such as Firefox, Chrome and Safari). It gives you a transparent editor pane in the foreground and the resulting compiled fragment shader rendered behind it. The shader is recompiled dynamically as the code changes. The latest version even has syntax and error highlighting, and bracket matching.

There have been a few other WebGL-based shader editors like this in the past, such as Shader Toy by Iñigo Quílez (aka IQ of demoscene group RGBA) and his more recent (though I believe unpublished) editor used in his fascinating live-coding videos.

Finished compositions are published to a gallery with the source code attached, and can be ‘forked’ to create additional works. Generally the author will leave their Twitter account name in the source code.

I have been trying to get to grips with some more advanced raycasting concepts, and being able to code something up in the sandbox and see the effect of every change is immensely useful.
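To give a flavour of the format, here is a minimal sandbox-style fragment shader – just an illustrative animated gradient, not one of the creations below, assuming the time and resolution uniforms the sandbox supplies:

    #ifdef GL_ES
    precision mediump float;
    #endif

    // Uniforms supplied by the sandbox environment.
    uniform float time;
    uniform vec2 resolution;

    void main(void) {
        // Normalise pixel coordinates to the 0..1 range.
        vec2 p = gl_FragCoord.xy / resolution.xy;

        // A simple animated colour wash; tweak any constant and the
        // result re-renders behind the editor almost immediately.
        vec3 col = 0.5 + 0.5 * cos(time + p.xyx * 6.2832 + vec3(0.0, 2.0, 4.0));

        gl_FragColor = vec4(col, 1.0);
    }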

Below are a bunch of my GLSL sandbox creations (batman symbol added by @emackey):

[embedded shader screenshots]

GLSL Sandbox is just the latest example of the merit of software development tools that provide immediate feedback, and it highlights the major advantage scripting languages have over heavy compiled languages, whose long build and linking times make experimentation costly and tedious. Inventing on Principle, a presentation by Bret Victor, is a great introduction to this topic.

I would really like a save-draft button that saves shaders locally, so I have some place to keep things that are a work in progress. I might have to look at how I can add this.

Update: Fixed links to point at glslsandbox.com.

05 June 2012

Pass the Source

Wellington City Council Verbal Submission

I made the following submission on the Council’s Draft Long Term Plan. Some of this relates to FLOSS. This was a 3-minute slot with 2 minutes for questions from the councillors.

Introduction

I have been a Wellington inhabitant for 22 years and am a business owner. We employ about 140 staff in Wellington, with offices in Christchurch, Sydney, Brisbane and the UK. I am also co-chair of NZRise, which represents NZ-owned IT businesses.

I have 3 points to make in 3 minutes.

1. The Long Term plan lacks vision and is a plan for stagnation and erosion

It focuses on selling assets, such as community halls and council operations, postponing investments, reducing public services such as libraries and museums, and increasing user costs. This will not create a city where “talent wants to live”. With this plan, who would have thought the citizens of the city had elected a Green mayor?

Money speaks louder than words. Both borrowing levels and proposed rate increases are minimal and show a lack of investment in the city, its inhabitants and our future.

My company is about to open an office in Auckland. A manager was recently surveying staff about team allocation and noted, as an aside, that between 10 and 20 Wellington staff would move to Auckland given the opportunity. We are not simply competing with Australia for hearts and minds, we are competing with Auckland whose plans for investment are much higher than our own.

2. Show faith in local companies

The best way to encourage economic growth is to show faith in the talent that actually lives here and pays your rates. This means making sure council staff have a strong direction and mandate to procure locally. In particular, the procurement process needs to be overhauled to make sure it does not exclude SMEs (our backbone) from bidding for work (see this NZCS story). It needs to be streamlined, transparent and efficient.

A way of achieving local company participation is through disaggregation – breaking large-scale initiatives up into smaller, more manageable components – for the following reasons:

  • It improves project success rates, which helps the public sector be more effective.
  • It reduces project cost, which benefits the taxpayers.
  • It invites small business, which stimulates the economy.

3. Smart cities are open source cities

Use open source software as the default.

It has been clear for a long time that open source software is the most cost-effective way to deliver IT services. It works for Amazon, Facebook, Red Hat and Google and just about every major Silicon Valley success since the advent of the internet. Open source drives the internet and these companies because it has an infinitely scalable licensing model – free. Studies, such as the one I have here from the London School of Economics, show the cost-effectiveness and innovation that come with open source.

It pains me to hear about proposals to save money by reducing library hours and increasing fees, when the amount of money being saved is less than the annual software licence fees our libraries pay, and when world-beating free alternatives exist.

This has to change. Looking around the globe, it is the visionary and successful local councils that are mandating the use of FLOSS – from Munich to Vancouver to Raleigh, NC, to Paris to San Francisco.

As well as saving money, open source brings a state of mind. That is:

  • Willingness to share and collaborate
  • Willingness to receive information
  • The right attitude to be innovative, creative, and try new things

Thank you. There should now be 2 minutes left for questions.

05 January 2012

Pass the Source

The Real Tablet Wars

tl;dr (formerly known as the Executive Summary): Openness + Good Taste Wins

Gosh, it’s been a while. But this site is not dead – I’ve just been distracted by identi.ca and Twitter.

I was going to write about Apple, again – a result of unexpected and unwelcome exposure to an iPad over the Christmas holidays. But then I read Jethro Carr’s excellent post where he describes trying to build the Android OS from Google’s open source code base. He quite mercilessly exposes the lack of “open” in some key areas of that platform.

It is more useful to look at the topic as an issue of “open” vs “closed”, where the iPad is one example of the latter. But, increasingly, Android platforms are beginning to display similarly inane closed attributes – to the disadvantage of users.

Part of my summer break was spent helping out at the premier junior sailing regatta in the world, this year held in Napier, NZ. Catalyst, as a sponsor, has built and is hosting the official website.

I had expected to swan around, sunbathing, drinking cocktails and soaking up some atmosphere. Instead a last minute request for a new “live” blogging section had me blundering around Joomla and all sorts of other technology with which I am happily unfamiliar. Days and nightmares of iPads, Windows, wireless hotspots and offshore GSM coverage.

The plan was simple: the specialist blogger, himself a world-renowned sailor, would take his tablet device out on the water on the spectator boat. From there he would watch and blog starts, racing, finishes and anguished reactions from parents (if there is one thing that unites races and nationalities, it is parental anguish over sporting achievement).

We had a problem in that the web browser on the tablet didn’t work with the web-based text editor used in the Joomla CMS. That had me scurrying around for a replacement for the tinyMCE plugin, just about the most common browser-based editing tool. But a quick scan around various forums showed me that the alternative editors were not a solution, and that the real issue was a bug in the client browser.

“No problem”, I thought. “Let’s install Firefox, I know that works”.

But no, Firefox is not available to iPad users, and Apple likes to “protect” its users by tightly controlling whose applications are allowed to run on the tablet. OK, what about Chrome? Same deal. You *have* to use Apple’s own buggy browser; it’s for your own good.

Someone suggested that the iPad’s operating system needed upgrading, and that the new version might have a fixed browser. We couldn’t do that ourselves because we didn’t have Apple’s music-playing software, iTunes, on a PC. Fortunately Vodafone were also a sponsor, and not only did they have iTunes handy, they downloaded the upgrade for us. Only problem: the upgrade wiped all the apps that our blogger and his family had previously bought and installed.

Er, and the upgrade failed to fix the problem. One day gone.

So a laptop was press-ganged into action, which, in the end, was a blessing, because other trials later showed that typing blogs fast, on an ocean swell, is very hard without a real keyboard. All those people pushing tablets at schools: keep in mind it is good to have our children *write* stuff, often.

The point of this post is not really to bag Apple, but to bag the mentality that stops people using their own devices in ways that help them through the day. I only wanted to try a different browser to Safari, not an unusual thing to do. Someone else might want to try out a useful little application a friend has written for them, but that wouldn’t be allowed.

But the worst aspect of this is that, because of Apple’s success in creating well-designed gadgets, other companies have decided that “closed” is also the correct approach to take with their products. This is crazy. It was an open platform – the Linux kernel with Android – that allowed them to compete with Apple in the first place, and there is no doubt that when given a choice, choice is what people want – assuming “taste” requirements are met.

Other things being equal*, who is going to choose a platform where the company that sold you a neat little gadget controls all the things you do on it? But there is a strong trend by manufacturers such as Samsung, and even Linux distributions such as Ubuntu, to start placing restrictions on their clients and users. To decide for all of us how we should behave and operate *our* equipment.

The explosive success of the personal computer was that it was *personal*. It was your own productivity, life enhancing device. And the explosive success of DOS and Windows was that, with some notable exceptions, Microsoft didn’t try and stop users installing third party applications. The dance monkey boy video is funny, but the truth is that Microsoft did want “developers, developers, developers, developers” using its platforms because, at the time, it knew it didn’t know everything.

Apple, Android handset manufacturers and even Canonical (Ubuntu) are falling into the trap of not knowing that there is stuff they don’t know and will probably never know. Similar charges are now being made about Facebook and Twitter. The really useful devices and software will come from companies and individuals who realise that whilst most of what we all do is the same as what everyone else does, it is the stuff we do differently that makes us unique, and that is what we need to control and manage for ourselves. Allow us to do that, with taste, and you’ll be a winner.

PS I should also say “thanks” to fellow sponsors Chris Devine and Devine Computing for just making stuff work.

* I know all is not equal. Apple’s competitive advantage is that it “has taste” – but not in its restrictions.

18 May 2011

Andrew Caudwell

Show Your True Colours

This last week saw the release of a fairly significant update to Gource – replacing the outdated, 3DFX-era rendering code with something a bit more modern, utilizing more recent OpenGL features like GLSL pixel shaders and VBOs (vertex buffer objects).

A lot of the improvements are under the hood, but the first thing you’ll probably notice is the elimination of banding artifacts in bloom, the illuminated fog Gource places around directories. This effect is pretty tough on the ‘colour space’ of so-called Truecolor, the maximum colour depth on consumer monitors and display devices, which provides just 256 different shades of grey to play with.

When you render a gradient across the screen, there are 3 or 4 times more pixels than there are shades of each colour (a 1024-pixel-wide grey gradient has only 256 shades to span it, so each shade repeats for about 4 pixels), producing visible ‘bands’ of the same shade. If multiple gradients like this get blended together, as happens with bloom, you simply run out of ‘in between’ colours and the issue becomes even more exaggerated, as seen below (contrast adjusted for emphasis):

[screenshot: banding artifacts in the bloom gradient]

Those aren’t compression artifacts you’re seeing!

Gource now uses colour diffusion to combat this problem. Instead of sampling the exact gradient of bloom for the distance of a pixel from the centre of a directory, we take a fuzzy sample in that vicinity instead. When zoomed in, you can see the picture is now slightly noisy, but the banding is completely eliminated. Viewed at the intended resolution, you can’t really see the trickery going on – in fact the effect even seems somewhat more natural, a bit closer to how light bouncing off particles of mist would actually behave.

[screenshot: the same bloom gradient with colour diffusion applied]
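In GLSL, the idea looks something like the sketch below – this is not Gource’s actual shader (the uniforms and the noise function here are placeholders), just the shape of the trick:

    #ifdef GL_ES
    precision mediump float;
    #endif

    // Placeholder uniforms – Gource structures this differently.
    uniform vec2 centre;   // directory centre, in pixels
    uniform float radius;  // bloom radius, in pixels

    // A cheap per-pixel pseudo-random value in the 0..1 range.
    float rand(vec2 co) {
        return fract(sin(dot(co, vec2(12.9898, 78.233))) * 43758.5453);
    }

    void main(void) {
        float d = distance(gl_FragCoord.xy, centre) / radius;

        // The fuzzy sample: jitter the gradient position by roughly one
        // colour step either way, diffusing each band into noise.
        float jitter = (rand(gl_FragCoord.xy) - 0.5) * (2.0 / 255.0);

        float glow = clamp(1.0 - d + jitter, 0.0, 1.0);
        gl_FragColor = vec4(vec3(glow * glow), 1.0);
    }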

The other improvement is speed – everything is now drawn with VBOs: large batches of object geometry are passed to the GPU in as few transfers as possible, eliminating CPU and IO bottlenecks. Shadows cast by files and users are now done in a second pass on the GPU, using the same geometry as the lit pass – making them really cheap compared to before, when we effectively wore the cost of drawing the whole scene twice.

Text is now drawn in a single pass, including shadows, using some fragment shader magic (take two samples of the font texture, offset by 1-by-1 pixels, and blend appropriately). Given the ridiculous number of file, user and directory names Gource draws at once with some projects (Linux kernel Git import commit, I’m looking at you), doing half as much work there makes a big difference.
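The two-sample trick boils down to something like this sketch (illustrative names, not the actual Gource source):

    #ifdef GL_ES
    precision mediump float;
    #endif

    // Illustrative names only – not the actual Gource shader.
    uniform sampler2D fontTexture; // alpha font atlas
    uniform vec2 shadowOffset;     // one texel, i.e. 1.0 / atlas size
    uniform vec4 textColour;

    varying vec2 texCoord;

    void main(void) {
        // Sample 1: the glyph itself.
        float glyph = texture2D(fontTexture, texCoord).a;

        // Sample 2: the same texture shifted by one pixel for the shadow.
        float shadow = texture2D(fontTexture, texCoord - shadowOffset).a;

        // Blend: black drop shadow underneath, text colour on top.
        vec4 col = vec4(vec3(0.0), shadow * textColour.a);
        gl_FragColor = mix(col, textColour, glyph);
    }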

06 October 2010

Andrew Caudwell

New Zealand Open Source Awards

I discovered today that Gource is a finalist in the Contributor category for the NZOSA awards. Exciting stuff! A full list of nominations is here.

I’m currently taking a working holiday to make some progress on a short film presentation of Gource for Onward!.

Update: here’s the video presented at Onward!:

Craig Anslow presented the video on my behalf (thanks again Craig!), and we did a short Q/A over Skype afterwards. The music in the video is Aksjomat przemijania (Axiom of going by) by Dieter Werner. I suggest checking out his other work!

14 August 2009

Piers Harding

Auth SAML 2.0 for Mahara

Following on from the SAML 2.0 work that I've done recently for Moodle, I thought it would be useful to do the same for the Mahara ePortfolio service while I was in the same space. Details of the first release can be found here, with tested versions for both trunk and 1.1_STABLE.

02 August 2009

Piers Harding

Moodle and SAML 2.0 Web SSO

Of late I have been doing a lot of SSO integration work for the NZ Ministry of Education, and during this time I came across an excellent project, FEIDE. One of the offshoots of this has been the development of a high-quality PHP library for SAML 2.0 Web SSO – SimpleSAMLPHP.

For Moodle integration, Erlend Strømsvik of Ny Media AS developed an authentication plugin, to which I've made a number of changes around configuration options and Moodle session integration. This has now been documented and added to Moodle contrib to give it better visibility to the Moodle community at large. Documentation is here and the contrib entry is here.

27 June 2009

Piers Harding

Perl sapnwrfc 0.30

While doing some work for a client recently, I got the opportunity to do some major performance work on sapnwrfc for Perl. The net result is that a number of memory leaks, mainly from Perl values not going out of scope properly, have been fixed.

Additionally, I've had some time to put together a proper cookbook-style set of examples in the sapnwrfc-cookbook. These examples, while specifically for Perl, are almost identical for the sapnwrfc libraries for Python, Ruby and PHP too.