Catalyst IT Limited  
Planet Catalyst
 

18 July 2018

Catalyst Blog


Agile and Test-Driven Development

by Daniel Roperto

The spirit of Agile development is to allow changes to happen when they are needed.

The biggest benefit of running a test-driven development process is that the project does not have to stop in order to make a change.

Clean, tidy, and efficient. This is a Volkswagen Beetle assembly line.

Before thinking about Agile let’s look at another common approach to software development.

The Waterfall Method

The Waterfall method can be extremely efficient. A project is broken into several steps, each step is completed by a specialist in that area and sent on as input for the next step. It provides a high throughput and the results can be high quality.

In the traditional software development life cycle, these steps are generally defined as:

  1. Requirements
  2. Design
  3. Implementation
  4. Verification
  5. Maintenance

In the requirements step the analyst will collect all things that need to be done, which will be passed to a software engineer to design a system based on that. The design is then sent to a programmer who will write code as specified.

The Waterfall approach - Diagram showing flow from one stage to the next.

So, is Agile just a buzzword? 

Agile is not the most efficient methodology; the Waterfall approach can potentially deliver the desired results in less time. So is Agile just a buzzword? When it comes to software development, everything always changes, even constants change. Waterfall is great for mass production but really bad when modifications are needed, and in the world of software, modifications are always needed. Blaming the initial specification for being wrong does not solve the problem if there is a genuine interest in delivering value and responding to changing requirements.

In software development there is no need for mass production: due to its digital nature, software can be copied at any time at no additional cost. The focus should be on developing the right thing at the right time, and that is what an Agile approach strives to achieve. In this respect, it is very effective.

Let’s consider how we might go about building a car from an Agile paradigm.

Henrik Kniberg created this useful graphic which clearly demonstrates the idea of the Minimum Viable Product, or MVP. He went on to write a hugely influential blog post that literally illustrates the thinking behind the MVP concept. Instead of documenting an extensive list of requirements, we will get the main idea of what is being done: a vehicle to move from point A to point B. We could deliver a skateboard really quickly, and then, based on people’s feedback we can evolve that product towards what is really being requested. It will definitely take longer (less efficient) than building a car straight away but it will also ensure we create what the customer wants (more effective). As an added bonus, the client can already use something (delivered value) from early stages of the project.

Diagram comparing building a car starting with one wheel, compared with starting by delivering a skateboard.

It all looks great in theory, but as soon as we start making major modifications to the software in order to respond to changes, it will break. Probably within a week of development starting, the software will have gained enough complexity that a major change will require major refactoring, and features that were working before will stop working: suddenly you are infested with regression bugs everywhere.

To get everything back on track again, the team will have to come to a halt in producing new features (stop adding value) until most bugs are eliminated and the refactoring is complete. When you look back, you spent so many hours refactoring the code to improve its quality, but from the customer’s perspective there is no visible change, so it looks like a big waste of time. Even worse, people will become afraid of refactoring, as it causes delay and frustration and will probably also introduce new bugs.

“Fear leads to anger, anger leads to hate, hate leads to suffering.” - Master Yoda

But worry not, my padawan. 

 

Introducing Test-Driven Development

There is a special technique called “Test-Driven Development” or simply TDD -- it states that for every piece of code you write, you should write a simple test that proves why that code is needed.
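
As a minimal sketch of that cycle (this is not from the original post; it is plain Python using the built-in unittest module, and the function name is made up), you write the failing test first, then just enough code to make it pass:

import unittest

def add_item(cart, item):
    # The small piece of production code the test below justifies.
    return cart + [item]

class CartTest(unittest.TestCase):
    def test_adding_an_item_grows_the_cart(self):
        # Written first: this test fails until add_item exists and behaves,
        # which is the "proof" of why that code is needed.
        self.assertEqual(add_item([], "book"), ["book"])

if __name__ == "__main__":
    unittest.main()

Each new behaviour gets the same treatment: a failing test, the code to make it pass, then a refactor while the tests stay green.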

Like a ratchet, TDD is a process that helps avoid regressions in software. TDD helps teams move their project forwards with confidence.

A Ratchet Diagram - TDD is like a ratchet, preventing regressions.

The benefits include:

a) Fewer bugs - the software will be fully tested as often as you want
b) Better scope - programmers will work on small pieces of code at a time
c) Reduced technical debt - code is constantly refactored and improved

But the main benefit, in my opinion, is: velocity.

Test-driven developers will progress more slowly than fast-and-furious developers at first, but after a few cycles are complete, their velocity is similar. Without TDD you may create patches much faster and close a lot more tickets; with TDD, however, you will not need to close as many tickets because, well, there are simply fewer bugs to fix.

Image credits

 

05 July 2018

Catalyst Blog


Haere ki te wharepukapuka o Toi Ohomai

 

by Kathryn Tyree

The longer you stay at the Toi Ohomai library, the more you realise how special it is. You walk into the light, bright space and no one stares at you; it’s not too quiet, just welcoming with a gentle busy hum.

I love Polytech libraries, everything in there is so useful. But of course, it’s a vocational learning institution. Agriculture, Animal Care, Architecture and Land Surveying, Art... I want to go back to study 10 times over. Their website says “your success is key to the success of our whole region”. I want to stay.

photograph of about 30 small home grown lemons in a wicker basket at the 'share homegrown or excess food' spot at Toi Ohomai Library

Lucky for me and the rest of the Koha team at Catalyst, we got to come back a few times while we worked with the Toi Ohomai library on implementing Koha Library Management System. 

“We have chosen Koha because open source software is synonymous with the philosophy of librarians who believe that information should be freely accessible to everyone and that libraries are about sharing resources and creating strong communities.” ~ Toi Ohomai library

It is such a pleasure to work with this library. Everything about what they do is about openness, sharing knowledge and making space for people to learn and succeed. The Library Manager Lee Rowe leads a team who express these values in all their work – no wonder I feel so comfortable here. 

The library is committed to building relationships with their Māori learners and wider community, and has added new features to extend the Te Reo Māori translation for Koha. It’s something we have hoped to see from academic libraries in New Zealand for a long time.

However, Toi Ohomai haven’t just chosen an open source system because it suits their values – that wouldn’t get past IT procurement. The system they have implemented is world class. So we have Koha, making use of the EBSCO Discovery System Koha plugin, to provide one search for all physical and electronic library resources. It’s fully web-based and interoperable with all the other systems Toi Ohomai runs. It’s also the same set-up just announced by Virginia Tech, an R1 university research library in the USA. I’ll have to write another post sometime to cover in more detail what the Toi Ohomai Library have achieved in their implementation, or you can check it out for yourself at https://toiohomai.mykoha.co.nz

Now Toi Ohomai are celebrating their go-live with Koha, completing their part in the systems merge for their institution. The Bay of Plenty and Waiariki polytechnics have merged and were gifted their new name, Toi Ohomai, meaning to achieve great heights; to be awakened by learning.

While most of our team are in Wellington, we have felt so included in the celebrations, with big boxes of donuts arriving in time to enjoy while the library has their official celebrations in Tauranga and Rotorua. 

We hope you will enjoy a piece of the celebrations in the photos. Go and visit this library sometime.

photo of a poster with fireworks in red, green and blue, with 'Koha is here! let's celebrate' text

Photo of mini chocolate cupcakes with green Koha logo shaped icing
Photo of a Toi Ohomai staff member wearing a bright green cape with a large Koha logo on it

Two Toi Ohomai librarians - a man and a woman - smiling and giving thumbs up with their computer now running the new Koha system

Photo of a poster at Toi Ohomai library with the timetable of Koha system training sessions available from go-live day 5 July 2018 through August
Photo of the library's patron catalogue computer, with the new Koha library management system up and running

a large booth display set up at the Toi Ohomai Library with green balloons and bunting, with signs about the new Koha LMS system

Kathryn Tyree of Catalyst IT in the Wellington Koha office, holding boxes of donuts that were gifted from Toi Ohomai on go-live day

3 boxes of donuts, two open to show chocolate nut icing and pink sprinkle icing, and some donut holes - looks delicious!


More:

https://toiohomai.ac.nz
https://toiohomai.mykoha.co.nz 
https://www.catalyst.net.nz/koha
https://koha-community.org

03 July 2018

Catalyst Blog


Koha tips and tricks 2

by Alex Buckley, Koha Junior Developer

Koha is a Library Management System (LMS) used worldwide by approximately 15,000 libraries. The Koha team at Catalyst are passionate about using Koha and helping libraries (big and small!) get the most out of their Koha LMS.

This is the second blog in the Koha tips and tricks series to help you make the most out of your Koha instance. Read the first post here: Koha tips and tricks 

 

1. How do I write a SQL report using parameters?

Koha’s reporting module (accessible from the staff client interface) allows you to retrieve a wide range of detailed information from your Koha instance’s database. The database contains the bibliographic, circulation, acquisition and patron data of your Koha site.

Reports can be created in one of two ways. The first is a GUI (Graphical User Interface) form, which does not require you to write SQL (Structured Query Language) code to query the database.

The second method is a SQL report. The Koha community wiki site has a page of pre-written SQL reports which you can use here

If you plan to write a custom SQL query then here are some basics of SQL which you’ll need to know:

  • Koha reports only retrieve data from the database, they do not modify or delete existing data or insert new records. Therefore you will only be writing SQL SELECT queries.
  • To narrow down the number of results in your SQL query you will need to use a WHERE clause in your SELECT statement. This means only records matching a specific condition in the WHERE clause will be returned.
  • To specify the condition to filter on in the WHERE clause you need to specify parameters. What does this mean? Well, when you run your report an input field will be displayed for you to enter a value, and this value will be used to filter the report results.

In the SQL of the report, the parameters must be written in a specific format which is <<Text to be displayed by the input field|authorised value>>

For example: SELECT * FROM biblio left join items on biblio.biblionumber=items.biblionumber WHERE biblio.frameworkcode=<<Enter the framework you want to retrieve bib records and items for>>

This simple report will return all bibliographic records, and their associated items, that have a ‘frameworkcode’ value in the biblio database table matching the entered framework.

The parameter text contained inside the << >> characters is what is displayed when running the report.

In the below screenshot a parameter of ‘ACQ’ is entered; this will be substituted for <<Enter the framework you want to retrieve bib records and items for>> and any biblios with a ‘frameworkcode’ value matching ‘ACQ’ will be returned along with their associated items.

Title of Enter parameters for test report, showing box labelled Enter the framework you want to retrieve bib records and items for, input box has 'ACQ' entered.  

As of the most recent major Koha release (Koha 18.05), using parameters in SQL reports got easier: parameters can now be re-used in reports.

This means that if you want to use the same parameter multiple times in a SQL report, only one input field is displayed for that parameter, instead of multiple input fields where you have to write in the same value.
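
As a sketch of what that looks like (the report itself is made up, but the table and column names follow the standard Koha schema, and 'branches' is one of the authorised value keywords that turns the input into a library drop-down), the query below uses the same parameter twice yet should only prompt once on 18.05 and later:

SELECT b.title, i.barcode, i.homebranch, i.holdingbranch
FROM items i
JOIN biblio b ON b.biblionumber = i.biblionumber
WHERE i.homebranch = <<Pick a library|branches>>
   OR i.holdingbranch = <<Pick a library|branches>>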

 

2. How do I merge bibliographic records?

Human error can happen in a library. One such error is the creation of multiple records for the same bibliographic item.

These multiple bibliographic records can easily end up containing different data, leaving you with the problem of having to move the unique, valuable data from one record to the other when trying to remove one of the duplicates. With Koha this annoying task is solved through the Cataloging module's ability to merge records, which consolidates the data differences between the records into a single record.

Here are the steps to merge records in Koha:

1. In the Koha staff client go to ‘Cataloging’.

2. Perform a cataloguing search by writing a search query term in the search box at the top of this page, with the ‘Cataloging search’ option highlighted underneath, as you can see in the below screenshot. An effective search term to use is the title of the bibliographic record, so that the duplicate records are returned as search results.

Cataloguing search bar with text 'Grandfather's journey' entered ready to push 'submit' to search

3. Select the check boxes beside the duplicate records in the search results and select the ‘Merge selected’ button.

two catalogue record results for 'Grandfather's journey'. each had a box to the left that has been 'ticked' to choose these are to be merged into one biblio record.

4. Select which of the records you want to keep and select ‘Next’. This selected record is referred to as the ‘Reference’. The other record(s) which aren’t selected will be deleted from the biblio and biblio_metadata database tables, but their unique data will not be lost; it will simply be added to the reference record.

Image called 'merging records', with the records listed to choose which will be the chosen reference after merging is complete. in this case there are two options to choose, with top option selected. 'next' button shown to proceed

5. Merging the records into a single reference record is a good chance to clear out unwanted MARC field and subfield values from the two (or more) source records you are merging, as you can decide exactly what data is consolidated into the merged record. Simply select (to keep) or unselect (to remove) data from the source records. By default, all data in the reference record is selected. Be sure to select the ‘Merge’ button at the bottom of this page.

image titled 'Merging Records' showing source records listed on left - each unmerged reference has a tab and you can flip through tabs to tick box choose which data will represent merge. And destination records listed on right show the chosen data to represent this merged title

6. The final screen in the Merge record workflow shows you the outcome of the merging. In the case of the screenshot below the merge action was successful. The reference record (biblionumber 1) was kept and contains the selected data (selected in step 5) from the deleted record (biblionumber 40).

Image titled 'Merging Records', with text saying the merge was successful and a link to the merged record, plus a small report showing the number of the record that was kept and, below it, any other records that had data pulled from them for the final record

Stay tuned for more posts in Alex's 'Koha tips and tricks' series! 

 

Catalyst Koha

If you have any questions or comments about this blog post, or would like some support with your Koha instance, you are welcome to email us at koha@catalyst.net.nz

Follow Catalyst Koha on Twitter

07 June 2018

Catalyst News

ShadowTech Day: A day in the world of an IT professional

ShadowTech Day: an opportunity for young women to spend a day with a woman working in IT.

ShadowTech logo

01 June 2018

Catalyst News

Training during June and July

Did you know that Catalyst offers training at our offices in Wellington, Auckland and Christchurch? Our Catalyst AU office also does training in Sydney and Melbourne. If you have teams spread around the country, we can offer the same courses in the different locations.

09 April 2018

Kristina Hoeppner


Mahara 18.04: New privacy features

Last Friday, 6 April 2018, we, the Mahara core team at Catalyst, released Mahara 18.04. It was half a year of intense work, especially getting the GDPR features in to help institutions with their compliance with that new EU regulation.

The GDPR is also the reason for the early release of Mahara 18.04. Typically, we release towards the end of the month. Since we know that many institutions need to upgrade before 25 May 2018, we made sure to release as soon as possible to give everyone a bit more time to upgrade.

It was a pleasure to work on Mahara 18.04. There are many other new features in this release, and it’s been fantastic to see one of our part-time students contribute a lot of bug fixes and also some new features that had been on our wishlist for a very long time.

Here’s the video I made to introduce a number of the new features.

Silence

Empty chairs at a table
Photo: Sabri Tuzcu (Unsplash)

It’s been a wee bit quiet over the last 1.5 years here on my blog. I’m going to resurrect it again this year because it does help to keep things in one place.

Let’s start off with the past (the empty seats) and fill them up as time goes by.

17 September 2017

Andrew Ruthven

Missing opkg status file on LEDE...

I tried to install a package on my home router, which is running LEDE, only to be told that libc wasn't installed. Huh? What's going on?! It looked to all intents and purposes as though libc wasn't installed. And it looked like nothing was installed.

What to do if opkg list-installed is returning nothing?

I finally tracked down the status file it uses as being /usr/lib/opkg/status. And it was empty. Oh dear.

Fortunately the info directory had content. This means we can rebuild the status file. How? This is what I did:

cd /usr/lib/opkg/info
for x in *.list; do
pkg=$(basename $x .list)
echo $pkg
opkg info $pkg | sed 's/Status: .*$/Status: install ok installed/' >> ../status
done

And then for the special or virtual packages (such as libc and the kernel):

for x in *.control; do
pkg=$(basename $x .control)
if ! grep -q "Package: $pkg" ../status
then
echo $pkg is missing; cat $x >> ../status
fi
done

I then had to edit the file to tidy up some newlines for the kernel and libc, and set the status lines correctly. I used "install hold installed".

Now that I've shaved that yak, I can install tcpdump to try and work out why a VoIP phone isn't working. Joy.

02 September 2017

Andrew Ruthven

Network boot a Raspberry Pi 3

I found that to make all this work I had to piece together a bunch of information from different locations. This fills in some of the blanks from the official Raspberry Pi documentation. See here, here, and here.

Image

Download the latest raspbian image from https://www.raspberrypi.org/downloads/raspbian/ and unzip it. I used the lite version as I'll install only what I need later.

To extract the files from the image we need to jump through some hoops. Inside the image are two partitions, we need data from each one.

 # Make it easier to re-use these instructions by using a variable
 IMG=2017-04-10-raspbian-jessie-lite.img
 fdisk -l $IMG

You should see some output like:

 Disk 2017-04-10-raspbian-jessie-lite.img: 1.2 GiB, 1297862656 bytes, 2534888 sectors
 Units: sectors of 1 * 512 = 512 bytes
 Sector size (logical/physical): 512 bytes / 512 bytes
 I/O size (minimum/optimal): 512 bytes / 512 bytes
 Disklabel type: dos
 Disk identifier: 0x84fa8189
 
 Device                               Boot Start     End Sectors  Size Id Type
 2017-04-10-raspbian-jessie-lite.img1       8192   92159   83968   41M  c W95 FAT32 (LBA)
 2017-04-10-raspbian-jessie-lite.img2      92160 2534887 2442728  1.2G 83 Linux

You need to be able to mount both the boot and the root partitions. Do this by taking the offset of each one and multiplying it by the sector size, which is given on the line saying "Sector size" (typically 512 bytes). For example, with the 2017-04-10 image, boot has an offset of 8192 sectors, so I mount it like this (it is VFAT):

 mount -v -o offset=$((8192 * 512)) -t vfat $IMG /mnt
 # I then copy the data off:
 mkdir -p /data/diskless/raspbian-lite-base-boot/
 rsync -xa /mnt/ /data/diskless/raspbian-lite-base-boot/
 # unmount the partition now:
 umount /mnt

Then we do the same for the root partition:

 mount -v -o offset=$((92160 * 512)) -t ext4 $IMG /mnt
 # copy the data off:
 mkdir -p /data/diskless/raspbian-lite-base-root/
 rsync -xa /mnt/ /data/diskless/raspbian-lite-base-root/
 # umount the partition now:
 umount /mnt

DHCP

When I first set this up, I used OpenWRT on my router, and I had to patch /etc/init.d/dnsmasq to support setting DHCP option 43. As of the writing of this article, a similar patch has been merged, but isn't in a release yet, and, well, there may never be another release of OpenWRT. I'm now running LEDE, and the good news is it already has the patch merged (hurrah!). If you're still on OpenWRT, then here's the patch you'll need:

https://git.lede-project.org/?p=source.git;a=commit;h=9412fc294995ae2543fabf84d2ce39a80bfb3bd6

This lets you put the following in /etc/config/dnsmasq. It says that any device that uses DHCP and has a MAC issued by the Raspberry Pi Foundation should have option 66 (boot server) and option 43 set as specified. Set the IP address in option 66 to the device that should be used for tftp on your network; if it's the same device that provides DHCP then it isn't required. I had to set the boot server, as my other network boot devices are using a different server (with an older tftpd-hpa; I explain the problem further down).

 config mac 'rasperrypi'
         option mac 'b8:27:eb:*:*:*'
         option networkid 'rasperrypi'
         list dhcp_option '66,10.1.0.253'
         list dhcp_option '43,Raspberry Pi Boot'

tftp

Initially I used a version of tftpd that was too old and didn't support how the RPi tried to discover if it should use the serial number based naming scheme. The version of tftpd-hpa in Debian Jessie works just fine. To find out the serial number you'll probably need to increase the logging of tftpd-hpa; do so by editing /etc/default/tftpd-hpa and adding "-v" to the TFTP_OPTIONS option. It can also be useful to watch tcpdump to see the requests and responses, for example (10.1.0.203 is the IP of the RPi I'm working with):

  tcpdump -n -i eth0 host 10.1.0.203 and dst port 69

This was able to tell me the serial number of my RPi, so I made a directory in my tftpboot directory with the same serial number and copied all the boot files into there. I then found that I had to remove the init= portion from the cmdline.txt file I'm using. To ease debugging I also removed quiet. So, my current cmdline.txt contains (newlines entered for clarity, but the file has it all on one line):

dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=/dev/nfs
nfsroot=10.1.0.253:/data/diskless/raspbian-lite-base-root,vers=3,rsize=1462,wsize=1462
ip=dhcp elevator=deadline rootwait hostname=rpi.etc.gen.nz
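
For reference, the earlier step of copying the boot files into a serial-numbered directory amounts to something like the following sketch (the serial number and tftp root shown are made up; use the serial number from the tftpd-hpa logs and whatever TFTP_DIRECTORY is set to on your server):

 SERIAL=0a1b2c3d
 mkdir -p /srv/tftp/$SERIAL
 rsync -xa /data/diskless/raspbian-lite-base-boot/ /srv/tftp/$SERIAL/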

NFS root

You'll need to export the directories you created via NFS. My exports file has these lines:

/data/diskless/raspbian-lite-base-root	10.1.0.0/24(rw,no_root_squash,sync,no_subtree_check)
/data/diskless/raspbian-lite-base-boot	10.1.0.0/24(rw,no_root_squash,sync,no_subtree_check)

And you'll also want to make sure you're mounting those correctly during boot, so I have in /data/diskless/raspbian-lite-base-root/etc/fstab the following lines:

10.1.0.253:/data/diskless/raspbian-lite-base-root   /       nfs   rw,vers=3       0   0
10.1.0.253:/data/diskless/raspbian-lite-base-boot   /boot   nfs   vers=3,nolock   0   2

Network Booting

Now you can hopefully boot. Unless you run into this bug, as I did, where the RPi will sometimes fail to boot. It turns out the fix, which is mentioned on the bug report, is to put bootcode.bin (and only bootcode.bin) onto an SD card. That'll then load the fixed bootcode, which will then boot reliably.

21 October 2016

Kristina Hoeppner


Getting the hang of hanging out (part 2)

A couple of days ago I experienced some difficulties using YouTube Live Events. So today, I was all prepared:

  • Had my phone with me for 2-factor auth so I could log into my account on a second computer in order to paste links into the chat;
  • Prepared a document with all the links I wanted to paste;
  • Had the Hangout on my presenter computer running well ahead of time.

Indeed, I was done with my prep so far in advance that I had heaps of time, and thus wanted to pause the broadcast: it looked like it was not actually broadcasting, since I couldn’t see anything on the screen, so I thought I needed to adjust the broadcast’s start time.

So I stopped the broadcast, and as soon as I hit the button I knew I shouldn’t have. Stopping the broadcast doesn’t pause it; it ends it and kicks off the publishing process.

Yep, I panicked. I had about 10 minutes to go to my session and nobody could actually join it. Scrambling for a solution, I quickly set up another live event, tweeted the link and also sent it out to the Google+ group.

Then I changed the title of the just ended broadcast to something along the lines of “Go to description for new link”, put the link to the new stream into the description field and also in the chat as I had no other way of letting people know where I had gone and how they could join me.

I was so relieved when people showed up in the new event. That’s when the panic subsided, and I still had about 3 minutes to spare to the start of the session.

The good news? We released Mahara 16.10 and Mahara Mobile today (though actually, we had already soft-launched the app on the Google Play store yesterday to ensure that it was live for today).

24 July 2016

Andrew Ruthven

Allow forwarding from VoiceMail to cellphones

Something I've been wanting to do with our Asterisk PBX at Catalyst for a while is to allow callers that hit VoiceMail to be forwarded to the callee's cellphone, where allowed. As part of an Asterisk migration we're currently carrying out I finally decided to investigate what is involved. One of the nice things about the VoiceMail application in Asterisk is that callers can hit 0 for the operator, or * for some other purpose. I decided to use * for this purpose.

I'm going to assume a working knowledge of Asterisk dial plans, and I'm not going to try and explain how it works. Sorry.

When a caller hits * the VoiceMail application exits and looks for a rule that matches the 'a' extension. Now, the simple approach looks like this within our macro for handling standard extensions:

[macro-stdexten]
...
exten => a,1,Goto(pstn,027xxx,1)
...

(Where I have a context called pstn for placing calls out to the PSTN).

This'll work, but anyone who hits * will be forwarded to my cellphone. Not what I want. We need to get the dialled extension into a place where we can perform extension matching on it. So instead we'll have this (the extension is passed into macro-stdexten as the first argument - ARG1):

[macro-stdexten]
...
exten => a,1,Goto(vmfwd,${ARG1},1)
...

Then we can create a new context called vmfwd with extension matching (my extension is 7231):

[vmfwd]
exten => 7231,1,Goto(pstn,027xxx,1)

I actually have a bit more in there to do some logging and set the caller ID to something our SIP provider will accept, but you get the gist of it. All I need to do is arrange for a rule per extension that is allowed to have its VoiceMail callers forwarded to a cellphone. Fortunately I have that part automated.
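
For illustration only (not the exact generated configuration, and the numbers are masked in the same way as above), one of those per-extension rules inside the vmfwd context, with the logging and caller ID tweak mentioned, might look like:

exten => 7231,1,NoOp(Forwarding VoiceMail caller for ${EXTEN} to cellphone)
 same => n,Set(CALLERID(num)=04xxx)
 same => n,Goto(pstn,027xxx,1)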

The only catch is for extensions that aren't allowed to be forwarded to a cellphone. If someone calling their VoiceMail hits *, their call will be hung up and I get nasty log messages about there being no rule for them. How do we handle them? Well, we send them back to VoiceMail. In the vmfwd context we add a rule like this:

exten => _XXXX,1,VoiceMail(${EXTEN}@sip,${voicemail_option})
  same => n,Hangup

So any extension that isn't otherwise matched hits this rule. We use ${voicemail_option} so that we can use the same mode as was used previously.

Easy! Naturally this exact approach won't work unchanged for other people trying to do this, but given I couldn't find write-ups on how to do it, I thought it might be useful to others.

Here's my macro-stdexten and vmfwd in full:

[macro-stdexten]
exten => s,1,Progress()
exten => s,n,Dial(${ARG2},20)
exten => s,n,Goto(s-${DIALSTATUS},1)
exten => s-NOANSWER,1,Answer
exten => s-NOANSWER,n,Wait(1)
exten => s-NOANSWER,n,Set(voicemail_option=u)
exten => s-NOANSWER,n,Voicemail(${ARG1}@sip,u)
exten => s-NOANSWER,n,Hangup
exten => s-BUSY,1,Answer
exten => s-BUSY,n,Wait(1)
exten => s-BUSY,n,Set(voicemail_option=b)
exten => s-BUSY,n,Voicemail(${ARG1}@sip,b)
exten => s-BUSY,n,Hangup
exten => _s-.,1,Goto(s-NOANSWER,1)
exten => a,1,Goto(vmfwd,${ARG1},1)
exten => o,1,Macro(operator)

[vmfwd]

exten => _XXXX,1,VoiceMail(${EXTEN}@sip,${voicemail_option})
  same => n,Hangup

#include extensions-vmfwd-auto.conf

And I then build extensions-vmfwd-auto.conf from a script that is used to generate configuration files for defining accounts, other dial plan rule entries and phone provisioning files.

With thanks to John Kiniston for the suggestion about the wildcard entry in vmfwd.

25 August 2014

Dan Marsden

SCORM hot topics.

As a follow-up to the GSOC post, I thought it might be useful to mention a few things happening with SCORM at the moment.

There are currently approximately 71 open issues related to SCORM in the Moodle tracker. Of those, 38 are classed as bugs/issues I should fix in stable branches at some point, and 33 are really feature/improvement requests.

Issues about to be fixed and under development
MDL-46639 – External AICC packages not working correctly.
MDL-44548 – SCORM Repository auto-update not working.

Issues that are high on my list of things to look at, and that I hope to get to sometime soon.
MDL-46961 – SCORM player not launching in Firefox when new window being used.
MDL-46782 – Re-entry of a scorm not using suspend_data or resuming itself should allow returning to the first sco that is not complete.
MDL-45949 – The TOC Tree isn’t quite working as it should after our conversion to YUI3 – it isn’t expanding/collapsing in a logical manner – could be a bit of work here to make this work in the right way.

Issues recently fixed in stable releases.
MDL-46940 – new window option not working when preview mode disabled.
MDL-46236 – Start new attempt option ignored if new window used.
MDL-45726 – incorrect handling of review mode.

New improvements you might not have noticed in 2.8 (not released yet)
MDL-35870 – Performance improvements to SCORM
MDL-37401 – SCORM auto-commit – allows Moodle to save data periodically even if the SCORM doesn’t call “commit”

New improvements you might not have noticed in 2.7:
MDL-28261 – Check for live internet connectivity while using SCORM – warns user if SCORM is unable to communicate with the LMS.
MDL-41476 – The SCORM spec defines a small amount of data that can be stored when using SCORM 1.2 packages, we have added a setting that allows you to disable this restriction within Moodle to allow larger amounts of data to be stored (you may need to modify your SCORM package to send more data to make this work.)

Thanks to Ian Wild, Martin Holden, Tony O’Neill, Peter Bowen, André Mendes, Matteo Scaramuccia, Ray Morris, Vignesh, Hansen Ler, Faisal Kaleem and many other people who have helped report/test and suggest fixes related to SCORM recently including the Moodle HQ Integration team (Eloy, Sam, Marina, Dan, Damyon, Rajesh) who have all been on the receiving end of reviewing some SCORM patches recently!

GSOC 2014 update

Another year of GSOC has just finished and Vignesh has done a great job helping us to improve a number of areas of SCORM!
I’m really glad to finally have some changes made to the JavaScript datamodel files as part of MDL-35870 – I’m hoping this will improve the performance of the SCORM player, as the JavaScript can now be cached properly by the user's browser rather than being dynamically generated using PHP.

Vignesh has made a number of general bug fixes to the SCORM code and has also tidied up the code in the 2.8 branch so that it now complies with Moodle’s coding guidelines.

These changes have involved almost every single file in the SCORM module and significant architectural changes have been made. We’ve done our best to avoid regressions (thanks Ray for testing SCORM 2004), but due to the large number of changes (and the fact that we only have 1 Behat test for SCORM) it would be really great if people could test the 2.8 branch with their SCORM content before release so we can pick up any other regressions that may have occurred.

Thanks heaps to Vignesh for his hard work on SCORM during GSOC – and kudos to Google for running a great program and providing the funding to help it happen!

10 July 2014

Dan Marsden

Goodbye Turnitin…

Time to say goodbye to the “Dan Marsden Turnitin plugin”… well almost!

Turnitin have done a pretty good job of developing a new plugin to replace the code that I have been working on since Moodle 1.5!

The new version of their plugin contains 3 components:

  1. A module (called turnitintool2) which contains the majority of the code for connecting to their new API and is a self-contained activity like their old “turnitintool” plugin
  2. A replacement plugin for mine (plagiarism_turnitin) which allows you to use plagiarism features within the existing Moodle Assignment, Workshop and forum modules.
  3. A new Moodle block that works with both the above plugins.

The Moodle.org Plugins database entry has been updated to replace my old code with the latest version from Turnitin. We have a number of clients at Catalyst using the new plugin and the migration has mostly gone OK so far; there are a few minor differences between my plugin and the new version from Turnitin, so I encourage everyone to test the upgrade to the new version before running it on their production sites.

I’m encouraging most of our clients to update to the new plugin at the end of this year but I will continue to provide basic support for my version running on all Moodle versions up to Moodle 2.7 and my code continues to be available from my github repository here:
https://github.com/danmarsden/moodle-plagiarism_turnitin

Thanks to everyone who has helped in the past with the plugin I wrote – hopefully this new version from Turnitin will meet everyone’s needs!

31 October 2012

Chris Cormack

Signoff statistics for October 2012

Here are the signoff statistics for bugs in October 2012
  • Kyle M Hall- 24
  • Owen Leonard- 18
  • Chris Cormack- 15
  • Nicole C. Engard- 10
  • Mirko Tietgen- 9
  • Marc Véron- 6
  • Frédéric Demians- 5
  • Jared Camins-Esakov- 5
  • Magnus Enger- 4
  • Jonathan Druart- 4
  • M. de Rooy- 3
  • Melia Meggs- 3
  • wajasu- 2
  • Paul Poulain- 2
  • Fridolyn SOMERS- 2
  • Tomás Cohen Arazi- 2
  • Matthias Meusburger- 1
  • Katrin Fischer- 1
  • Julian Maurice- 1
  • Koha Team Lyon 3- 1
  • Mason James- 1
  • Elliott Davis- 1
  • mathieu saby- 1
  • Robin Sheat- 1

16 October 2012

Chris Cormack

Unsung heroes of Koha 26 – The Ada Lovelace Day Edition

Darla Grediagin

Darla has been using Koha since 2006, for the Bering Strait School District in Alaska. This is pretty neat in itself; what is cooler is that, as far as I know, they have never had a ‘Support Contract’, doing things either by themselves or with the help of IT personnel as needed. One of Darla’s first blog posts that I read was about her struggles trying to install Debian on an eMac. I totally respect anyone who is trying to reclaim hardware from the dark side 🙂

Darla has presented on Koha at conferences, and maintains a blog that has useful information, including sections on what she would do differently, as well as some nice feel-good bits like this, from April 2007:

I know I had an entry titled this before, but I do love OSS programs.   Yesterday I mentioned that I would look at Pines because I like the tool it has to merge MARC records.  Today a Koha developer emailed me to let me know that he is working on this for Koha and it should be available soon.  I can’t imagine getting that kind of service from a vendor.

Hopefully she will be able to make it to Kohacon13 in Reno, NV. It would be great to put a face to the email address 🙂

10 October 2012

Chris Cormack

New Release team for Koha 3.12

Last night on IRC the Koha community elected a new release team for the 3.12 release. Once again it is a nicely mixed team: there are 16 people involved, from 8 different countries (India, New Zealand, USA, Norway, Germany, France, Netherlands, Switzerland), and four of the 12 roles are filled by women.

The release team will be working super hard to bring you the best release of Koha yet, and you can help:

  • Reporting bugs
  • Testing bug fixes
  • Writing up enhancement requests
  • Using Koha
  • Sending cookies
  • Inventing time travel
  • Killing MARC
  • Winning the lottery and donating the proceeds to the trust to use for Koha work.

24 July 2012

Pass the Source

Google Recruiting

So, Google are recruiting again. From the open source community, obviously. It’s where to find all the good developers.

Here’s the suggestion I made on how they can really get in front of FOSS developers:

Hi [name]

Just a quick note to thank you for getting in touch with so many of our
Catalyst IT staff, both here and in Australia, with job offers. It comes
across as a real compliment to our company that the folks that work here
are considered worthy of Google’s attention.

One thing about most of our staff is that they *love* open source. Can I
suggest, therefore, that one of the best ways for Google to demonstrate
its commitment to FOSS and FOSS developers this year would be to be a
sponsor of the NZ Open Source Awards. These have been very successful at
celebrating and recognising the achievements of FOSS developers,
projects and users. This year there is even an “Open Science” category.

Google has been a past sponsor of the event and it would be good to see
you commit to it again.

For more information see:

http://www.nzosa.org.nz/

Many thanks
Don

09 July 2012

Andrew Caudwell

Inventing On Principle Applied to Shader Editing

Recently I have been playing around with GLSL Sandbox (github), a what-you-see-is-what-you-get shader editor that runs in any WebGL-capable browser (such as Firefox, Chrome and Safari). It gives you a transparent editor pane in the foreground and the resulting compiled fragment shader rendered behind it. Code is recompiled dynamically as it changes. The latest version even has syntax and error highlighting, and bracket matching.

There have been a few other WebGL-based shader editors like this in the past, such as Shader Toy by Iñigo Quílez (aka IQ of Demo Scene group RGBA) and his more recent (though I believe unpublished) editor used in his fascinating live coding videos.

Finished compositions are published to a gallery with the source code attached, and can be ‘forked’ to create additional works. Generally the author will leave their twitter account name in the source code.

I have been trying to get to grips with some more advanced raycasting concepts, and being able to code something up in sandbox and see the effect of every change is immensely useful.

Below are a bunch of my GLSL sandbox creations (batman symbol added by @emackey):

    

    

GLSL Sandbox is just the latest example of the merit of software development tools that provide immediate feedback, and highlights the major advantages scripting languages have over heavy compiled languages with long build and linking times that make experimentation costly and tedious. Inventing on Principle, a presentation by Bret Victor, is a great introduction to this topic.

I would really like a save draft button that saves shaders locally, so I have some place to keep things that are a work in progress; I might have to look at how I can add this.

Update: Fixed links to point at glslsandbox.com.

05 June 2012

Pass the Source

Wellington City Council Verbal Submission

I made the following submission on the Council’s Draft Long Term Plan. Some of this related to FLOSS. This was a 3 minute slot with 2 minutes for questions from the councillors.

Introduction

I have been a Wellington inhabitant for 22 years and am a business owner. We employ about 140 staff in Wellington, with offices in Christchurch, Sydney, Brisbane and the UK. I am also co-chair of NZRise which represents NZ owned IT businesses.

I have 3 Points to make in 3 minutes.

1. The Long Term plan lacks vision and is a plan for stagnation and erosion

It focuses on selling assets, such as community halls and council operations, and postponing investments; on reducing public services such as libraries and museums; and on increasing user costs. This will not create a city where “talent wants to live”. With this plan, who would have thought the citizens of the city had elected a Green Mayor?

Money speaks louder than words. Both borrowing levels and proposed rate increases are minimal and show a lack of investment in the city, its inhabitants and our future.

My company is about to open an office in Auckland. A manager was recently surveying staff about team allocation and noted, as an aside, that between 10 and 20 Wellington staff would move to Auckland given the opportunity. We are not simply competing with Australia for hearts and minds, we are competing with Auckland whose plans for investment are much higher than our own.

2. Show faith in local companies

The best way to encourage economic growth is to show faith in the talent that actually lives here and pays your rates. This means making sure the council staff have a strong direction and mandate to procure locally. In particular, the procurement process needs to be overhauled to make sure it does not exclude SMEs (our backbone) from bidding for work (see this NZCS story). It needs to be streamlined, transparent and efficient.

A way of achieving local company participation is through disaggregation – breaking up large-scale initiatives into smaller, more manageable components – for the following reasons:

  • It improves project success rates, which helps the public sector be more effective.
  • It reduces project cost, which benefits the taxpayers.
  • It invites small business, which stimulates the economy.

3. Smart cities are open source cities

Use open source software as the default.

It has been clear for a long time that open source software is the most cost-effective way to deliver IT services. It works for Amazon, Facebook, Red Hat and Google and just about every major Silicon Valley success since the advent of the internet. Open source drives the internet and these companies because it has an infinitely scalable licensing model – free. Studies, such as the one I have here from the London School of Economics, show the cost effectiveness and innovation that comes with open source.

It pains me to hear about proposals to save money by reducing library hours and increasing fees, when the amount of money being saved is less than the annual software licence fees our libraries pay, when world-beating free alternatives exist.

This has to change. Looking round the globe, it is the visionary and successful local councils that are mandating the use of FLOSS, from Munich to Vancouver to Raleigh, NC, to Paris to San Francisco.

As well as saving money, open source brings a state of mind. That is:

  • Willingness to share and collaborate
  • Willingness to receive information
  • The right attitude to be innovative, creative, and try new things

Thank you. There should now be 2 minutes left for questions.

05 January 2012

Pass the Source

The Real Tablet Wars

tl;dr (formerly known as Executive Summary): Openness + Good Taste Wins

Gosh, it’s been a while. But this site is not dead. Just been distracted by identi.ca and Twitter.

I was going to write about Apple, again. A result of unexpected and unwelcome exposure to an iPad over the Christmas Holidays. But then I read Jethro Carr’s excellent post where he describes trying to build the Android OS from Google’s open source code base. He quite mercilessly exposes the lack of “open” in some key areas of that platform.

It is more useful to look at the topic as an issue of “open” vs “closed” where iPad is one example of the latter. But, increasingly, Android platforms are beginning to display similar inane closed attributes – to the disadvantage of users.

Part of my summer break was spent helping out at the premier junior sailing regatta in the world, this year held in Napier, NZ. Catalyst, as a sponsor, has built and is hosting the official website.

I had expected to swan around, sunbathing, drinking cocktails and soaking up some atmosphere. Instead a last minute request for a new “live” blogging section had me blundering around Joomla and all sorts of other technology with which I am happily unfamiliar. Days and nightmares of iPads, Windows, wireless hotspots and offshore GSM coverage.

The plan was simple, the specialist blogger, himself a world renown sailor, would take his tablet device out on the water on the spectator boat. From there he would watch and blog starts, racing, finishes and anguished reactions from parents (if there is one thing that unites races and nationalities, it is parental anguish over sporting achievement).

We had a problem in that the web browser on the tablet didn’t work with the web-based text editor used in the Joomla CMS. That had me scurrying around for a replacement for the tinyMCE plugin, just about the most common browser-based editing tool. But a quick scan around various forums showed me that the alternative editors were not a solution and that the real issue was a bug in the client browser.

“No problem”, I thought. “Let’s install Firefox, I know that works”.

But no, Firefox is not available to iPad users, and Apple likes to “protect” its users by tightly controlling whose applications are allowed to run on the tablet. OK, what about Chrome? Same deal. You *have* to use Apple’s own buggy browser, it’s for your own good.

Someone suggested that the iPad’s operating system we were using needed upgrading and that the new version might have a fixed browser. No, we couldn’t do that, because we didn’t have Apple’s music playing software, iTunes, on a PC. Fortunately Vodafone were also a sponsor, and not only did they have iTunes handy, they downloaded the upgrade. The only problem: the upgrade wiped all the apps that our blogger and his family had previously bought and installed.

Er, and the upgrade failed to fix the problem. One day gone.

So a laptop was press-ganged into action, which, in the end, was a blessing, because other trials later showed that typing blogs fast, on an ocean swell, is very hard without a real keyboard. All those people pushing tablets at schools: keep in mind it is good to have our children *write* stuff, often.

The point of this post is not really to bag Apple, but to bag the mentality that stops people using their own devices in ways that help them through the day. I only wanted to try a different browser to Safari, not an unusual thing to do. Someone else might want to try out a useful little application a friend has written for them, but that wouldn’t be allowed.

But the worst aspect of this is that because of Apple’s success in creating well designed gadgets other companies have decided that “closed” is also the correct approach to take with their products. This is crazy. It was an open platform, Linux Kernel with Android, that allowed them to compete with Apple in the first place and there is no doubt that when given a choice, choice is what people want – assuming “taste” requirements are met.

Other things being equal*, who is going to choose a platform where the company that sold you a neat little gadget controls all the things you do on it? But there is a strong trend by manufacturers such as Samsung, and even Linux distributions, such as Ubuntu, to start placing restrictions on their clients and users. To decide for all of us how we should behave and operate *our* equipment.

The explosive success of the personal computer was that it was *personal*. It was your own productivity, life enhancing device. And the explosive success of DOS and Windows was that, with some notable exceptions, Microsoft didn’t try and stop users installing third party applications. The dance monkey boy video is funny, but the truth is that Microsoft did want “developers, developers, developers, developers” using its platforms because, at the time, it knew it didn’t know everything.

Apple, Android handset manufacturers and even Canonical (Ubuntu) are falling into the trap of not knowing that there is stuff they don’t know and they will probably never know. Similar charges are now being made about Facebook and Twitter. The really useful devices and software will be coming from companies and individuals who realise that whilst most of what we all do is the same as what everyone else does, it is the stuff that we do differently that makes us unique and that we need to control and manage for ourselves. Allow us do that, with taste, and you’ll be a winner.

PS I should also say “thanks” to fellow sponsors Chris Devine and Devine Computing for just making stuff work.

* I know all is not equal. Apple’s competitive advantage is that it “has taste” – but not in its restrictions.

18 May 2011

Andrew Caudwell

Show Your True Colours

This last week saw the release of a fairly significant update to Gource – replacing the outdated, 3DFX-era rendering code with something a bit more modern, utilizing more recent OpenGL features like GLSL pixel shaders and VBOs.

A lot of the improvements are under the hood, but the first thing you’ll probably notice is the elimination of banding artifacts in Bloom, the illuminated fog Gource places around directories. This effect is pretty tough on the ‘colour space’ of so-called Truecolor, the maximum colour depth on consumer monitors and display devices, which provides 256 different shades of grey to play with.

When you render a gradient across the screen, there are 3 or 4 times more pixels than there are shades of each colour, producing visible ‘bands’ of the same shade. If multiple gradients like this get blended together, as happens with bloom, you simply run out of ‘in between’ colours and the issue becomes more exaggerated, as seen below (contrast adjusted for emphasis):

        

Those aren’t compression artifacts you’re seeing!

Gource now uses colour diffusion to combat this problem. Instead of sampling the exact gradient of bloom for the distance of a pixel from the centre of a directory, we take a fuzzy sample in that vicinity instead. When zoomed in, you can see the picture is now slightly noisy, but the banding is completely eliminated. Viewed at the intended resolution, you can’t really see the trickery going on – in fact the effect even seems somewhat more natural, a bit closer to how light bouncing off particles of mist would actually behave.
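
A rough sketch of the idea in GLSL (this is not Gource's actual shader; the uniforms and the falloff function are placeholders): jitter the distance used for the gradient lookup by a small pseudo-random amount per pixel, so neighbouring pixels land on different shades and the bands dissolve into fine noise.

// Not Gource's code -- a minimal illustration of colour diffusion.
uniform vec2 centre;       // directory centre in pixels (placeholder)
uniform vec3 glow_colour;  // bloom colour (placeholder)

float rand(vec2 p) {
    // cheap per-pixel pseudo-random value in [0, 1)
    return fract(sin(dot(p, vec2(12.9898, 78.233))) * 43758.5453);
}

float falloff(float d) {
    // stand-in for the real bloom gradient
    return exp(-d * d * 0.0001);
}

void main() {
    float dist = length(gl_FragCoord.xy - centre);
    float fuzz = (rand(gl_FragCoord.xy) - 0.5) * 4.0;
    gl_FragColor = vec4(glow_colour * falloff(dist + fuzz), 1.0);
}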

        

The other improvement is speed – everything is now drawn with VBOs, large batches of object geometry passed to the GPU in as few shipments as possible, eliminating CPU and IO bottlenecks. Shadows cast by files and users are now done in a second pass on the GPU using the same geometry as the lit pass – making them really cheap compared to before, when we effectively wore the cost of having to draw the whole scene twice.

Text is now drawn in a single pass, including shadows, using some fragment shader magic (take two samples of the font texture, offset by 1-by-1 pixels, blend appropriately). Given the ridiculous amount of file, user and directory names Gource draws at once with some projects (Linux Kernel Git import commit, I’m looking at you), doing half as much work there makes a big difference.
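
The font trick, roughly, in GLSL (again a sketch with made-up uniform names, not the real Gource shader):

// Not Gource's code -- drawing the glyph and its drop shadow in one pass.
uniform sampler2D font_tex;  // font atlas, alpha channel holds glyph coverage
uniform vec2 shadow_offset;  // roughly one pixel, in texture coordinates
uniform vec4 text_colour;
uniform vec4 shadow_colour;

void main() {
    vec2 uv      = gl_TexCoord[0].xy;
    float glyph  = texture2D(font_tex, uv).a;
    float shadow = texture2D(font_tex, uv - shadow_offset).a;
    // shadow underneath, glyph blended on top
    gl_FragColor = mix(shadow_colour * shadow, text_colour, glyph);
}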

06 October 2010

Andrew Caudwell

New Zealand Open Source Awards

I discovered today that Gource is a finalist in the Contributor category for the NZOSA awards. Exciting stuff! A full list of nominations is here.

I’m currently taking a working holiday to make some progress on a short film presentation of Gource for the Onward! conference.

Update: here’s the video presented at Onward!:

Craig Anslow presented the video on my behalf (thanks again Craig!), and we did a short Q/A over Skype afterwards. The music in the video is Aksjomat przemijania (Axiom of going by) by Dieter Werner. I suggest checking out his other work!