Catalyst IT Limited  
Planet Catalyst
 

14 February 2017

Catalyst Blog


JMeter as a Load Testing Tool

by Evan Giles

Catalyst manages a lot of Enterprise web applications, enough that we always seem to be upgrading one instance or another. More and more in the world of cloud-native application stacks, an application upgrade may also come with system architecture changes and improvements. This could be anything from updating the underlying Operating System to deploying into a new container-based platform.

There is always a cost and risk to change. Sometimes high, sometimes insignificant. And our years of experience in the managed services game mean we have developed considerable process and workflow for managing change.

In the case of application development, this means we include steps like unit tests in our application code, as well as getting our Quality Assurance team to test the application in a non-production environment.

In the case of some of our larger Enterprise Moodle LMS instances, our clients have very high performance requirements and a persistent and impatient student body. Some of our larger sites have thousands of concurrent user sessions and the LMS is expected to maintain a sub-second page build time.

Without a realistic automated load testing strategy, Catalyst would not be able to confidently roll out major changes to our Enterprise Moodle customers without risking a degradation in performance. Having the ability to run rounds of load testing also enables us to experiment with new toolsets and architectures, and validate whether they yield better performance outcomes.

We have used a number of tools, but more and more we have settled on JMeter - an open source, pure Java application.

We recently invested some time in a round of internal development to make it easier for us to launch a round of load testing on our Moodle sites, gather the results, and meaningfully compare them to previous rounds of testing. One clear discovery was that there was too much complexity and manual configuration required in the setup process.

We assessed a number of different potential approaches including:

  • Using the Moodle JMeter integration which generates some test plans automatically.
  • Using the JMeter recording facility, allowing us to “play back” some real user journeys.
  • Tidying up and reusing what we already had.

In the end we reviewed all of the above ideas, and came up with a totally new testing plan. The defining moment came when we realised that a typical JMeter test plan in isolation would not be reusable across our multiple clients, and that we needed to think of this as an independent project, stored in a Git repository of its own and built for the purpose of applying to arbitrary Moodle load testing requirements.

So that's what we have now, a repository which contains:

  • A JMeter test plan (an XML file which tells JMeter exactly what to do)
  • A list of testing (fake) users that will be provisioned into Moodle and then used to trigger the load testing.
  • A single Moodle course backup. Moodle has a binary archive format of an entire course that allows easy import and export.
  • Instructions on how to turn these assets into an actual performance test for a given Moodle site.
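
As context for the first item, a JMeter test plan is an XML (.jmx) document describing thread groups and the requests they fire. The skeleton below is a heavily trimmed, hypothetical sketch rather than our actual plan (the thread counts, CSV file name and course path are all placeholders), but it shows the shape of the thing:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<jmeterTestPlan version="1.2" properties="5.0">
  <hashTree>
    <TestPlan testclass="TestPlan" testname="Moodle load test"/>
    <hashTree>
      <!-- Simulated students: 100 threads ramping up over 60 seconds -->
      <ThreadGroup testclass="ThreadGroup" testname="Students">
        <stringProp name="ThreadGroup.num_threads">100</stringProp>
        <stringProp name="ThreadGroup.ramp_time">60</stringProp>
      </ThreadGroup>
      <hashTree>
        <!-- Each thread picks credentials from the list of fake test users -->
        <CSVDataSet testclass="CSVDataSet" testname="Test users">
          <stringProp name="filename">users.csv</stringProp>
          <stringProp name="variableNames">username,password</stringProp>
        </CSVDataSet>
        <!-- One step of the user journey: view a course page -->
        <HTTPSamplerProxy testclass="HTTPSamplerProxy" testname="Course page">
          <stringProp name="HTTPSampler.path">/course/view.php?id=2</stringProp>
        </HTTPSamplerProxy>
      </hashTree>
    </hashTree>
  </hashTree>
</jmeterTestPlan>
```

A plan like this is typically run headless with `jmeter -n -t plan.jmx -l results.jtl`, which writes per-request timings to a results file for later analysis.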

This means that we can now take an arbitrary Moodle site, and simulate a flock of busy users taking complicated activity journeys around that application. Using this, we can give any Moodle site a good workout, tracking performance metrics along the way. And because we can easily repeat this process, we can track changes in performance metrics after application changes.

Now it's much easier for us to do meaningful A/B testing, answering questions like:

  • Will doubling the application server's RAM improve performance?
  • When is the best time to trigger cloud auto-scaling?
  • Is this site faster with Apache or Nginx?
  • What difference does it make if we … ?
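
Once a round has run, JMeter's results boil down to lists of per-request response times, and comparing two rounds (the A and B above) can be sketched roughly as follows. This is a minimal illustration with made-up inline numbers, not part of our actual toolset; in practice the samples would be parsed from JMeter's .jtl result files:

```python
# Compare per-request response times (in ms) from two rounds of load
# testing. The inline sample lists are made up for illustration; real
# numbers would come from JMeter's .jtl result files.
from statistics import median

def percentile(samples, p):
    """Nearest-rank p-th percentile of a list of samples."""
    ordered = sorted(samples)
    rank = round(p / 100 * len(ordered)) - 1
    return ordered[max(0, min(len(ordered) - 1, rank))]

def summarise(samples):
    """Median and 95th percentile: the two numbers we compare first."""
    return median(samples), percentile(samples, 95)

round_a = [420, 380, 510, 390, 460, 880, 400, 430]  # e.g. before a change
round_b = [350, 330, 470, 360, 410, 700, 340, 390]  # e.g. after a change

for label, samples in (("A", round_a), ("B", round_b)):
    med, p95 = summarise(samples)
    print(f"Round {label}: median {med:.0f} ms, 95th percentile {p95} ms")
```

Here round B's median dropped by about 50 ms and its tail by 180 ms, which is exactly the kind of before/after signal repeated rounds of testing are meant to surface.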

And best of all, because we are storing the entire toolset in a code repository, every time we use this tool and go through this exercise, we can improve the selection of user journeys taken through the site, improving the usefulness of this tool moving forward.

06 February 2017

Catalyst Blog


DevOpsDays Sydney 2016

by Zoe Lu
 

Late last year, I attended the DevOpsDays Sydney 2016 conference. This conference aims to have all participants learning and sharing ideas about adopting DevOps practices.

The conference was entirely organised by volunteers from the local technical community, and had a very interesting format.

 

What is DevOps?

According to some points of view, DevOps is a specific methodology, mindset or cultural shift to broaden the focus of how traditional application delivery is executed. It focuses on collaboration and communication between the development, QA and IT operations departments.

Why is DevOps important?

Traditionally, within an organisation, the delivery process is separated based on each department’s function, and there is seldom interaction among the specialised functional teams. This often leads to a longer timeline for application delivery and can also potentially create tension between departments.

Modern delivery pipelines are moving fast and the frequency of application deployment is speeding up, in order to match demand in the market. How can we deliver applications faster, while keeping the risk low?

Adopting DevOps in an organisation promotes cross-department collaboration, communication and integration, providing fast feedback loops between teams and encouraging continuous learning and experimenting. This gives an organisation a shorter lead time when responding to market demands (new features or bug fixes), a lower deployment failure rate and a faster mean time to recover from faults.

 

Open Space

In addition to the amazing scheduled talks from local and international speakers, there was a session called “Open Space”. This is a free-form discussion/knowledge-sharing/brainstorming session contributed to by all participants. Open Space topics were suggested by anyone who had an idea to share or was seeking a solution to a problem. The topics were then voted on by attendees, with the most popular topics being selected and allocated to dedicated rooms. Attendees were free to come and go between different rooms as they pleased.

 

What did I learn from the conference?
 

1. Interaction between different teams

Part of adopting DevOps practice is the journey of addressing confusion between teams. To do so, empathy and communication are needed when it comes to working with people whose mindsets are different from yours. This applies not only between development and operations teams, but to all teams within an organisation.

Communication: listening to each other and repeating back what you hear to the person who said it will help ensure no packets are lost in human TCP communication (more handshaking).
Empathy: figuring out what others need by putting yourself in their shoes, and also providing each side of the story for better understanding. Continuous collaboration and shared responsibility benefit both sides, helping them learn and grow together.

 

2. What is the real issue that needs to be solved here?

Implementing new tools may help solve some technical problems, but without understanding the cause of an issue, the actual problem will last forever. Asking “why?” questions is the most powerful and straightforward way to identify the real issue from the stakeholders. It also helps us connect “what we are working on” to “whether the business will benefit from this decision”.

 

3. People

Humans are a very complicated distributed system. All sorts of new complexities emerge when humans are added as a factor.

We work not only with computers but with human beings every day, trying to solve all sorts of different problems. Having a DevOps mindset helps us work better with each other, by shifting our mentality and connecting with one another.

After all, the purpose of this conference was to provide our technical community a space to support and learn from each other. It was an inspiring experience for me to see and learn many ideas from people who are passionate about finding solutions, in order to help shape a better future workplace for all people involved in the field.

 

27 January 2017

Catalyst Blog


Catalyst Open Source Academy 2017

by Ian Beardslee

I want to talk (or is that blog?) about something that makes Catalyst a fantastic place to work.

Something that challenges the minds of the next generation.
Something that inspires Catalyst staff to learn and develop.
Something that helps keep me thinking and learning.
Something that makes changes around the world.

Photo of the Academy in progress

Recently the Catalyst office in Wellington hosted 20 senior secondary school students taking part in our 7th Open Source Academy.

The Open Source Academy is two weeks of intense learning and mentored project work. Participants work towards making a change in a real open source project. It is an opportunity for a group of keen Year 11 to 13 students to be challenged beyond what they have learned at school by learning from, and working with some of the best technologists in New Zealand.

Our Open Source Academy has been well-attended since it was first run in 2011. This year was no exception - we received 47 applications for the 20 places.

What we do is not just about the ‘code’. It is about working with people, about starting with an idea and developing it into a look, a database, an interface, an application.

The first week of the Academy is taken up with learning. We use the process of creating a small application to build on previous sessions, to teach the development process and tools. We have our Business Analysts, Database Administrators, Project Managers, UX Designers, Testers, Visual Designers, Directors, System Administrators and Developers all sharing their knowledge and love for working with open source technology with our participants.

The students then apply and expand on what they’ve learned when they start working on a real open source project in week two. Often communicating with people around the world via IRC or bug tracking tools, our students go through the process of learning how an open source community works with its code and processes whilst making changes. Together with hard skills, they learn that sometimes fixing a simple typo is just as important as building a new feature.

One of the final processes for the Academy is to get the students to tell the rest of the Academy what they did in their project team. This year, feedback from the project mentors was unanimous - the students downplayed how much they’d achieved in just three and a half days working on the project.

How many teenagers (or even adults, parents, teachers) have had a chance to make a change to an application that people around the world will use and benefit from?

Here’s what the students achieved:

Koha: Two of the students have patches in the latest release (16.11.02) that happened on Monday straight after the Academy. The rest of the students have their changes going through the process of testing and QA, and awaiting sign off. They saved 67 kittens, puppies, camels and/or villages.

Moodle: As a team, they built a plug-in to report (and export) course completions. This is going to end up in the Moodle Plug-ins Directory.

Piwik: There were over 20 pull requests by the students, with each member of the team getting changes merged into the code base that will form the next release in about a month’s time.

SilverStripe: Over 14 pull requests, covering everything from simple typo fixes to porting a module from SilverStripe 3 to SilverStripe 4.

It’s not just the many passionate Catalyst volunteers that make the Open Source Academy work. Our friends at InnoCraft, SilverStripe, Moodle and especially the Koha community are all important in their roles as mentors, taking the time to offer guidance to the students taking part.

With things like that going on each year, who else gets such an awesome start to the year by being inspired by a bunch of bright and keen secondary school students?

21 October 2016

Kristina Hoeppner


Getting the hang of hanging out (part 2)

A couple of days ago I experienced some difficulties using YouTube Live Events. So today, I was all prepared:

  • Had my phone with me for 2-factor auth so I could log into my account on a second computer in order to paste links into the chat;
  • Prepared a document with all the links I wanted to paste;
  • Had the Hangout on my presenter computer running well ahead of time.

Indeed, I was done with my prep so far in advance that I had heaps of time. Since I couldn’t see anything on the screen, it looked like the event was not actually broadcasting, so I wanted to pause the broadcast and thought I needed to adjust its start time.

That is why I stopped the broadcast, and as soon as I hit the button I knew I shouldn’t have. Stopping the broadcast doesn’t pause it; it ends it and kicks off the publishing process.

Yep, I panicked. I had about 10 minutes to go to my session and nobody could actually join it. Scrambling for a solution, I quickly set up another live event, tweeted the link and also sent it out to the Google+ group.

Then I changed the title of the just ended broadcast to something along the lines of “Go to description for new link”, put the link to the new stream into the description field and also in the chat as I had no other way of letting people know where I had gone and how they could join me.

I was so relieved when people showed up in the new event. That’s when the panic subsided, and I still had about 3 minutes to spare to the start of the session.

The good news? We released Mahara 16.10 and Mahara Mobile today (though actually, we soft-launched the app on the Google Play store already yesterday to ensure that it was live for today).

19 October 2016

Kristina Hoeppner


Getting the hang of hanging out (part 1)

Living in New Zealand, far, far away from the rest of the world (except maybe Australia), means that I’m doing a lot of online conference presentations, demonstrations, and meetings. I’ve become well-versed in a multitude of online meeting and conferencing software and know what works on Linux and what doesn’t.

The ones that don’t always give me a fright, as I have to start up my VM and hope for the best that it will not die on me unexpectedly. Usually, closing Thunderbird and any browsers helps free some resources in order to let Windows start up. I can only dream of a world in which every conferencing software also runs on Linux.

Lately, some providers have gotten better and make use of WebRTC technology, which only requires a browser but no fancy additional software or flash. Only when I want to do screensharing do I need to install a plugin, which is done quickly.

So for meetings of fewer than 10 people, I’m usually set and can propose a nice solution like Jitsi, which works well. In the past, my go-to option was Firefox Hello for simple meetings, but that was taken off the market.

But what to do when there may be more than 10 people wanting to attend a session? Then it gets tough very quickly. So I have been trialling Google Hangouts on Air recently, after seeing David Bell use them successfully. It looked easy enough, but boy, was I in for a surprise.

Finding the dashboard

At some point, my YouTube account was switched to a “Creator Studio” one and so I can do live events. Google Hangouts on Air are now YouTube Live Events and need to be scheduled in YouTube.

There is no link from the YouTube homepage to the dashboard for uploading or managing content. I’d have thought that by clicking on “My channel” I’d get somewhere, but far from it. There is nothing in the navigation.

The best choice is to click the “Video Manager” to get to a subpage of the creator area. Or, as I just found out, click your profile icon and then click the “Creator Studio” button.

Finding the creator dashboard

Getting to the creator dashboard either via the “Video Manager” on your channel or via the button under your profile picture.

Scheduling an event

Setting up an event is pretty straightforward, as it’s like filling in the information for a video upload, just with added fields for the event times.

Unfortunately, I haven’t found yet where I can change the placeholder for the video that is shown in the preview of the event on social media. It seems to set it to my channel’s banner image rather than allowing me to upload an event-specific image.

So once you have your event, you are good to go and can send people the link to it. The links that you get are only for the stream. They do not allow your viewers to actually join your hangout and communicate with you in there, and that’s where it gets a bit bizarre, and what prompted me to write this blog post so I can refer back to it in the future.

Different links for different hangouts

There is the hangout link and the YouTube event link

Streaming vs. Hangout

There are actually two components to the YouTube Live event (formerly known as Google Hangout on Air):

  1. The Hangout from which the presenter streams;
  2. The YouTube video stream that people watch.

In order to get into the Hangout, you click the “Start Hangout on Air” button on your YouTube events page. That takes you into a Google Hangout with the added buttons for the live event. You are supposed to see how many people joined in, but the count may be a bit off at times.

In that Google Hangout, you have all the usual functionality available of chats, screensharing, effects etc. You can also invite other people to join you in there. That will allow them to use the microphone. The interesting thing is that you can simply invite them via the regular Hangout invite. You can’t give them the link to the stream as they would not find the actual hangout. And if you only give people the link to the Hangout but not the stream, nobody will be in the stream.

Finding the relevant links in the hangout

You can also get the two different links from the hangout. Just make sure you get the correct one.

The YouTube video stream page only shows the content of the Hangout that is displayed in the video area, but not the chat. The live event has its separate chat that you can’t see in the Hangout! In order to see any comments your viewers make, you need to have the streaming page open and read the comments there.

In a way, it’s nice to keep the Hangout chat private because if you have other people join you in there as co-presenters, you can use that space to chat to each other without other viewers seeing what you type. However, it’s pretty inconvenient as you have to remember to check the other chat. Dealing with separate windows during a presentation can be daunting. It would be nicer to see the online chat also in the hangout window.

Today I even just fired up another computer and had the stream show there, which taught me another thing.

Having the stream on another computer also showed me how slow the connection was. The live event was at least 5 seconds behind if not more. That is something to consider when taking questions.

The stream was also very grainy. I was on a fast connection, but the default speed was on the lowest setting nevertheless. Fortunately, once I increased the resolution on the finished video, the video did get better. I don’t know if you could increase the setting during the stream.

Last but not least, I couldn’t present in full-screen mode as the window wouldn’t be recognized. I’ll have to try again and see if it works if I screenshare my entire desktop as it would be nicer not to show the browser toolbars.

No sharing of links

When you are not the owner of the stream, you cannot post URLs. I’m pretty sure that is to prevent trolls from misusing public YouTube events to post links. However, it’s pretty inconvenient for the rest of us who want to hold meetings and webinars and share content. You can’t post a single link. Only I as organizer could post links. Unfortunately, I found that out only after the event, as I was logged in under a different account.

Being used to many other web conferencing software, I’ve come to like the backchannel and the possibility to post additional material, which are in many cases links, so people can simply click on them. This was impossible in the YouTube live event as I was only a regular user. And even had I logged in with my creator account, which I’ll certainly do during the next session on Friday, nobody else would have been able to post a link. That is very limiting. I wish it were possible to determine whether links were allowed or not.

Editing the stream

Once the event was over today, I went back to the video, but couldn’t find any editing tools. I started being discouraged as I had hoped to simply trim the front and the back a bit from non-essential chatter and then just keep the rest of the video online rather than trimming my local recording that I had done on top of the online recording, encoding that and uploading it. Before I could get sadder, I had to do some other work, and once I came back to the recording, I suddenly had all my regular editing tools available and rejoiced. Apparently, it takes a bit until all functionality is at your disposal.

So I trimmed the video, which was not easy, but I managed. And then it did its encoding online. After some time, the shortened recording was available and I didn’t have to send out a new link to the video. 🙂

Summing up

What does that mean for the next live event with YouTube events?

  1. Click the “Creator Studio” button under my Google / YouTube profile to get to the editor dashboard easily.
  2. Invite people who should have audio privileges through the Hangout rather than giving them the YouTube Live link, which is displayed more prominently.
    • Co-presenters are invited via Hangout.
    • Viewers get the YouTube live link.
  3. Open the YouTube Live event with the event creator account in order to be able to post links in the chat on YouTube. Have both the Hangout and the YouTube Live event open so you can see the online chat of those who aren’t in the Hangout.
  4. Take into account that there is a delay until the content is shown on YouTube.
  5. Once finished, wait a bit until all editing features are available and then go into post-production.

Remembering all these things will put me into a better position for the next webinar, which is a repeat session of today’s and showcases the new features of Mahara 16.10.

Update: Learn some more about YouTube Live events from my second webinar.

14 October 2016

Jonathan Harker

Learning the contrabass trombone

Wessex Contrabass in F and Shires bass trombone, side by side.

I’ve recently acquired a Wessex contrabass trombone in F. It is pretty much a knock-off of the Thein Ben van Dijk model, and compared to this gold standard of contrabass trombones, this instrument is about an eighth of the price and perfectly decent. It plays really well throughout the range, and the slide, valves and bell are all of high build quality, unlike the notorious Chinese-made instruments of the past.

But really, this post is just an excuse to test out a nifty music notation WordPress plugin. The shorthand it uses is ABC which is a bit quaint compared to Lilypond, but it seems to work well enough. For instance, take the first scale we might learn on a contrabass trombone:
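
In ABC shorthand, an F major scale written for the bass clef looks something like the following (an illustrative sketch of the notation format, not the post’s original figure):

```abc
X:1
T:F major scale
M:4/4
L:1/4
K:F clef=bass
F,, G,, A,, B,, | C, D, E, F, |]
```

The K:F key signature flattens the B automatically, and each trailing comma drops a note by an octave, pushing the scale down into the bass register.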

The contrabass trombone in F only has six positions on the open slide instead of seven. Furthermore, only the first five are actually practical, unless you are Tarzan, so we can play the G on the first (D) valve in third position. While the A is also theoretically available in first position on the D valve, it is indistinct and slightly flat. Play that shit on the open slide in fourth. Good. Now, how about an excerpt from Ein Alpensinfonie by Richard Strauss:


Sounds good! Now, pop along to the NZSO performance in March 2017 to hear Shannon playing it, live in concert! In the meantime, here’s this excerpt by the Berlin Philharmoniker:

11 October 2016

Kristina Hoeppner


Mahara Hui @ AUT recap

I’m playing catch-up and working my way backwards through my events. Yesterday, I wrote a bit about the NZ MoodleMoot on 5 October 2016. Just a day before that, AUT organized a local half-day Mahara Hui, Mahara Hui @ AUT 2016. Lisa Ransom and Shen Zhang from CfLAT (Centre for Learning and Teaching) were responsible for the event and did well wrangling everything, making all attendees feel welcome.

It was great to catch up with lecturers and learning technology support staff from AUT, Unitec and University of Waikato, and with a user from Nurseportfolio. We started the day out with introductions and examples of how people use Mahara.

Mahara in New Zealand tertiaries

At AUT, the CfLAT team trained about 630 students this academic year, in particular Public Policy, Tourism and Midwifery. Paramedics are also starting to use ePortfolios and can benefit from the long experience that Lisa and Shen have supporting other departments at AUT.

Linda reported that Mahara is now also being used in culinary studies in elective courses as well as degree papers. They use templates to help students get started, but then let them run with it. Portfolios are well suited for culinary students as they can showcase their work as well as document their creation progress and improve their work.

She also showcased a portfolio from a new lecturer who became a student in her area of expertise, going through a portfolio assignment with her students to see for herself how the portfolios worked and what she could and wanted to expect from her students. By going through the activity herself, she became an expert and now has a better understanding of the portfolio work.

John, an AUT practicum leader, who was new to AUT, came along to the hui and said that they were starting to use portfolios for their lesson plans and goals. Reflections are expected from the future teachers and form an important aspect. I’m sure we’ll hear more from him.

Sally from Nursing at AUT is looking at Mahara again, and could form connections directly with Unitec and Nurseportfolio, which is fantastic, because that’s what these hui are about: connecting people.

JJ updated the group on the activities at Unitec. Medical imaging is going digital and looking into portfolios, and they also created a self-paced Moodle course on how to teach with Mahara effectively so that lecturers at Unitec can get a good overview.

Stephen from the University of Waikato gave an overview of the portfolio activities at his university. Waikato still works with two systems, MyPortfolio.school.nz for education students becoming teachers, and the new Waikato-hosted Mahara site. Numerous faculties at Waikato now work with portfolios. If you’d like to find out more directly, you can watch recordings from the last WCELfest, in particular the presentations by Richard Edwards, Sue McCurdy and Stephen Bright. Portfolios will be used even more in the future, as evidence from general papers will need to be collected in them by every student.

We also discussed a couple of ideas from a lecturer and are interested in other people’s opinions on them. One idea was to be able to share portfolios more easily on social networks, see directly when a portfolio was updated, and share that news again. The other idea was to show people who are interested in a portfolio when new content has been added. The latter is already possible to a degree with the watchlist. However, students or lecturers still need to put specific pages on the watchlist first, rather than the changes coming to them. The enhancements that Gregor is planning for the watchlist go more in that direction.

Mahara 16.10

In a second part of the hui, I presented the new features of Mahara 16.10, and we spent a bit of time on taking a closer look at SmartEvidence.

I’m very excited that this new version will be live very soon and look forward to the feedback by users on how SmartEvidence works out for them. It’s the initial implementation. While it doesn’t contain all the bells and whistles, I think it is a great beginning to get the conversations started around use cases besides the ones we had and see how flexible it is.

Next hui and online meetings

If you want to share how you are using Mahara, you’ll have the opportunity to do so in Wellington on 27 October 2016 when we’ll have another local Mahara Hui, Mahara Hui @ Catalyst. From 5 to 7 April 2017, we are planning a bigger Mahara Hui again in Auckland. More information will be shared soon on the Mahara Hui website.

There will also be two MUGOZ online meetings on 19 and 21 October 2016 in which I’ll be presenting the new Mahara 16.10 features. You are welcome to attend either of these 1-hour sessions organized by the Australian Mahara User Group. Since the sessions are online, anybody can tune in.

15 August 2016

Jonathan Harker

Verona

Ah… fair Verona, such a lovely town. We arrived by train, after a vaporetto ride to the Venice Santa Lucia train station.

One highlight of the trip we had both been looking forward to was the opportunity to see Turandot, the magnificent opera by Puccini, performed at the Verona Arena. In preparation for the evening ahead, in the afternoon we went to the Maria Callas exhibition, which was truly extraordinary. Her costumes, posters, photographs, props, newspaper clippings and so on, all about her life and career at La Scala in Milan, and elsewhere, with an excellent audio guide which interspersed commentary with recordings of her performing opera arias. In the context of the tragic events in her life, hearing her voice and her incredible performances was exquisitely poignant and moving.

Puccini’s Turandot was amazing. It was a Zeffirelli production, so the sets and costumes were fantastic. The chorus we estimated to be at least 150 singers, which is far larger than typical New Zealand opera performances I’ve played in or been to. The sound of the full chorus at fortissimo was simply astonishing, and it also meant that the conductor, rather than giving the orchestra the hand in order to plead restraint, was instead egging the brass on, simply in order to be heard. The result was an absolutely thrilling, intense and unforgettable sound. Went for a pizza and a cheeky prosecco afterwards!

The next day, we went on a wine tour of the local Valpolicella region, organised with Pagus Tours. We visited three wineries in the region: the first was a fairly large producer, the second a smaller family business with a beautiful cellar door building, where we had lunch. The only other people on the tour were a couple from Florida, Rob and Angela, who were excellent company.

There are four notable DOC and DOCG wines of the Valpolicella region, made predominantly from the corvina grape, but with varying amounts of corvinone, rondinella, molinara, and other local grape varieties.

Valpolicella (sometimes Valpolicella Rosso) is a DOC red wine made in a light style without oak, for day time summer drinking, rather like a rosé. There is also Valpolicella Classico, a historical denomination indicating just that it comes from one of five townships to the west of the region. Valpolicella Superiore is fermented for longer for a heavier style red wine, and aged in oak for at least a year.

The DOCG wine that has been made traditionally in the Valpolicella region since Roman times is Recioto, a sweet red wine. The grapes are harvested very late and dried on racks for up to three months in order to concentrate the juice and flavour. They will have lost nearly half their weight in water, and the juice is then fermented and cut short to produce a very sweet wine. Until the mid-20th Century, this was by far the predominant wine made in the region. We tried a couple of very good recioto wines, and they are very crisp and fruit driven, without the raisin or prune overtones of port.

In the 1970s a method of winemaking emerged called Ripasso, meaning “re-passed”. Now a DOC wine in itself, it is a Valpolicella wine with the pomace from a Recioto or Amarone added back in, for a second round of maceration and further fermentation. This produces a more robust, darker and beautifully complex wine, which must be aged in oak for a minimum of two years.

Finally, a wine called Amarone emerged in the mid-20th century. Now a DOCG wine, a good story is that it resulted from barrels of recioto abandoned during World War Two that were left to fully ferment. Although this story may be somewhat fanciful, since such wine had probably been produced in the past, modern Amarone only began to be deliberately produced in the 1950s. Amarone is a very strong, highly alcoholic, strongly oaked, full and complex wine, made from dried late-harvest red grapes as for a Recioto. For the DOCG it must be aged for three years in oak, and many makers age it for longer still.

After lunch we visited the tour guide’s family business, Damoli, which makes a stunning 2006 “Checo” amarone. We were forced to buy six bottles to bring back in the luggage.

After all that hard work, and a nap at the hotel, we met up with Rob and Angela again at an excellent restaurant, Trattoria Tre Marchetti, for a four-course degustation. It was entirely decadent and well-deserved: ham with an apricot and a cherry marinated in mustard and amarone, which gave them a horseradish kick. A porcini and black truffle fettuccine, then braised veal cheek in jus, with buttered potatoes and a little pressed spoon of suviche zucchini and carrot. And a plate of miniature dessert pastries. With a great Superiore, a Zenato Ripasso, and finally a Recioto, which we dared to try with the red meat; it was surprising and fantastic. The wine was offered in an array of wineglasses including Murano glass, and one for the Ripasso that was seriously the size of my face. And as anyone who knows me knows, I have a big face.

The following day it was hot, and we felt a little tired and lazy, so we had pizza with soppressata, olives, capers and big white anchovies from a good outdoor pizzeria for lunch, and read our books for a while. We had a look around the fortress museum: lots of devotional art, frescoes and statues of the Virgin Mary.

Interesting different things about Italy #49: bathroom taps are often operated by a foot pedal.

At 5.30 we jumped on the train again. Next up, Bologna, less than an hour away, where we just had great fresh pasta with Bolognese ragù for dinner at a Chinese-run canteen for only €5, since almost nothing else was open. But that’s another story for another post!

11 August 2016

Jonathan Harker

Venice

Traveling in style on Italotreno at 300 km/h.

On Monday, we checked out of our hotel in Rome, and jumped on a train to Venice. This Italotreno train zoomed along at up to 300 km/h in places, in a comfortable quiet cabin with free WiFi. On arrival, we found the main island of Venice rammed with tourists, not helped by an illusion of density; the “streets” are very narrow. There are no roads, only canals and pedestrian footpaths; no vehicles, save for hand-pulled carts for delivering goods, and of course the famous Venetian gondolas.

Our hotel room was teeny-tiny but manageable, in a Venetian house that had been converted into a hotel. We were to later learn this to be common; the only “locals” we saw were the business owners, tour guides, restaurant staff and the like. One guide told us that not many locals can afford to live on the main island any more due to the increasing costs of maintenance and insurance as the island subsides (and the sea level rises), and the sky-high rents, driven in part by highly lucrative tourism. Most apart from the mega-rich now live on the mainland, and if they still own property on the island, let it out through Airbnb; much of the housing in Venice is now hotels, or even empty.

After settling in we found what turned out to be an awful place for tea, Aciugheta. The service was terrible and they served tired, microwaved frozen seafood on chewy packet pasta.

The next day we found Magna Bevi Tasi, a lovely place to start the day with coffee and a panini in the morning, on the square next to the hotel. From there we embarked on a walking tour around San Marco, and a visit to St. Mark’s and the Doge’s Palace. In the afternoon we went on the Vaporetto (water bus) out to the island of Murano, with its glass factories that produce the famous and beautiful Venetian glass, and Burano, famous for its lace. By this point the relentless crowds of tourists everywhere drove me to escape down a quiet alleyway, where I found, only a hundred yards away, an oasis of quiet and colourful residential Venice, and managed a few idiot-free photographs.

That night we found Antica Osteria ai Tre Leone for tea, a lovely place right next to the Bridge Hotel with good pasta and a good wine list.

On Wednesday it was Bek’s birthday, and so we ventured out into the local countryside on a Prosecco tour. What better way to spend a birthday than drinking nice sparkling wine in the sun?

Prosecco vineyards near Valdobbiadene, Italy.

We went to three different Prosecco wineries, all producing Prosecco Superiore DOCG, which is sparkling wine made in a region around the towns of Conegliano and Valdobbiadene, from the Prosecco grape. The first was Toffoli, near Conegliano. They make a really good millesimato, extra dry. We mailed six magnums home to make use of the good NZ shipping rate. Happy birthday! These will come in handy for special occasions. We also got chatting to one of the staff who came to New Zealand for a couple of years and worked in Marlborough vineyards, and he very kindly gave us a couple of bottles of their sparkling rosé, made from the Marzemino grape.

A prosecco vending machine, in the middle of a vineyard.

On the way to the next vineyard in Valdobbiadene, where we had lunch, we stopped by a prosecco vending machine, out in the middle of the vineyard on a hill, with chairs and tables nearby. The whole thing runs as some sort of local honesty system. Just amazing!

Prosecco Superiore DOCG from Villa Sandi.

The third vineyard was Villa Sandi, which also had a lovely extra dry Prosecco Superiore.

After a hard day of drinking sparkling wine and being chauffeured around the Italian countryside, we went to the Tre Leone again for dinner. It’s a good restaurant, and everywhere was so crowded that trying to find another good one would have been tiresome.

The next day we decided to take it easy. The main island of Venice was relentlessly busy with tourists, so we emerged about midday and decided to bob along Canal Grande on the Vaporetto for about an hour and a half, until we ended up at the Lido.

Here we reclined on the beach for a while with Prosecco and spritz. Dear reader, if you want to visit Venice in the height of summer, go to the island of Lido – it is refreshingly free of tourist crowds and has good local cafés and restaurants. We went to a nice outdoor place for dinner on Gran Viale San Maria Elisabetta with good food and service.

Tomorrow we catch the train to Verona!

24 July 2016

Andrew Ruthven

Allow forwarding from VoiceMail to cellphones

Something I've been wanting to do with our Asterisk PBX at Catalyst for a while is to allow callers who reach VoiceMail to be forwarded to the callee's cellphone, where that's allowed. As part of an Asterisk migration we're currently carrying out, I finally decided to investigate what is involved. One of the nice things about the VoiceMail application in Asterisk is that callers can hit 0 for the operator, or * for some other purpose. I decided to use * for this purpose.

I'm going to assume a working knowledge of Asterisk dial plans, and I'm not going to try and explain how it works. Sorry.

When a caller hits *, the VoiceMail application exits and looks for a rule matching the special extension a. Now, the simple approach looks like this within our macro for handling standard extensions:

[macro-stdexten]
...
exten => a,1,Goto(pstn,027xxx,1)
...

(Where I have a context called pstn for placing calls out to the PSTN).

This'll work, but anyone who hits * will be forwarded to my cellphone. Not what I want. Instead we need to get the dialled extension into a place where we can perform extension matching on it. So instead we'll have this (the extension is passed into macro-stdexten as the first variable - ARG1):

[macro-stdexten]
...
exten => a,1,Goto(vmfwd,${ARG1},1)
...

Then we can create a new context called vmfwd with extension matching (my extension is 7231):

[vmfwd]
exten => 7231,1,Goto(pstn,027xxx,1)

I actually have a bit more in there to do some logging and set the caller ID to something our SIP provider will accept, but you get the gist of it. All I need to do is arrange for a rule for each extension that is allowed to have its VoiceMail callers forwarded to a cellphone. Fortunately I have that part automated.

The only catch is extensions that aren't allowed to be forwarded to a cellphone. If a caller to one of those hits *, the call is hung up and I get nasty log messages about there being no rule for it. How do we handle them? Well, we send them back to VoiceMail. In the vmfwd context we add a rule like this:

exten => _XXXX,1,VoiceMail(${EXTEN}@sip,${voicemail_option})
  same => n,Hangup

So any extension that isn't otherwise matched hits this rule. We use ${voicemail_option} so that we can use the same mode as was used previously.

Easy! Naturally this exact approach won't work unchanged for other people trying to do this, but given I couldn't find write-ups on how to do it, I thought it might be useful to others.

Here's my macro-stdexten and vmfwd in full:

[macro-stdexten]
exten => s,1,Progress()
exten => s,n,Dial(${ARG2},20)
exten => s,n,Goto(s-${DIALSTATUS},1)
exten => s-NOANSWER,1,Answer
exten => s-NOANSWER,n,Wait(1)
exten => s-NOANSWER,n,Set(voicemail_option=u)
exten => s-NOANSWER,n,Voicemail(${ARG1}@sip,u)
exten => s-NOANSWER,n,Hangup
exten => s-BUSY,1,Answer
exten => s-BUSY,n,Wait(1)
exten => s-BUSY,n,Set(voicemail_option=b)
exten => s-BUSY,n,Voicemail(${ARG1}@sip,b)
exten => s-BUSY,n,Hangup
exten => _s-.,1,Goto(s-NOANSWER,1)
exten => a,1,Goto(vmfwd,${ARG1},1)
exten => o,1,Macro(operator)

[vmfwd]

exten => _XXXX,1,VoiceMail(${EXTEN}@sip,${voicemail_option})
  same => n,Hangup

#include extensions-vmfwd-auto.conf

And I then build extensions-vmfwd-auto.conf from a script that is used to generate configuration files for defining accounts, other dial plan rule entries and phone provisioning files.
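As a rough illustration, the generation step might look something like the sketch below. This is a hypothetical reconstruction, not Andrew's actual script: the extension-to-cellphone mapping and output path are made-up placeholders, standing in for the real account-definition data.

```python
# Hypothetical sketch of generating extensions-vmfwd-auto.conf.
# The mapping below is a placeholder; in practice it would come from
# the same data used to generate accounts and provisioning files.
FORWARDS = {
    "7231": "027xxxxxxx",  # extension -> cellphone (placeholder numbers)
    "7232": "021xxxxxxx",
}

lines = ["; Auto-generated - do not edit by hand"]
for ext, cell in sorted(FORWARDS.items()):
    # One rule per extension that is allowed cellphone forwarding.
    lines.append(f"exten => {ext},1,Goto(pstn,{cell},1)")

conf = "\n".join(lines) + "\n"
with open("/tmp/extensions-vmfwd-auto.conf", "w") as f:
    f.write(conf)

print(conf)
```

Any extension missing from the mapping simply gets no rule, so it falls through to the _XXXX catch-all and lands back in VoiceMail.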

With thanks to John Kiniston for the suggestion about the wildcard entry in vmfwd.

02 December 2014

Andrew Ruthven

LCA2015 - Debian Miniconf & nz2015 Debian mini-DebConf

nz2015 mini-DebConf

Already attending linux.conf.au? Come a couple of days earlier and attend the mini-DebConf too! There will be a day of talks with a strong focus on the Debian project and a bug squashing day.

Debian Miniconf

After 5 years, the Debian Miniconf is back! Run as part of linux.conf.au 2015, this event will attract speakers talking on topics that suit the broader audience attending LCA. The Debian Miniconf has been one of the largest miniconfs in the history of linux.conf.au.

For more information about both these events which I'm organising, head over to: nz2015.mini.debconf.org!

25 August 2014

Dan Marsden

SCORM hot topics.

As a follow-up from the GSOC post, I thought it might be useful to mention a few things happening with SCORM at the moment.

There are currently approximately 71 open issues related to SCORM in the Moodle tracker; of those, 38 are classed as bugs/issues I should fix in the stable branches at some point, and 33 are really feature/improvement requests.

Issues about to be fixed and under development
MDL-46639 – External AICC packages not working correctly.
MDL-44548 – SCORM Repository auto-update not working.

Issues that are high on my list of things to look at, and that I hope to get to sometime soon.
MDL-46961 – SCORM player not launching in Firefox when new window being used.
MDL-46782 – Re-entry of a SCORM not using suspend_data or resuming itself should allow returning to the first SCO that is not complete.
MDL-45949 – The TOC Tree isn’t quite working as it should after our conversion to YUI3 – it isn’t expanding/collapsing in a logical manner – could be a bit of work here to make this work in the right way.

Issues recently fixed in stable releases.
MDL-46940 – new window option not working when preview mode disabled.
MDL-46236 – Start new attempt option ignored if new window used.
MDL-45726 – incorrect handling of review mode.

New improvements you might not have noticed in 2.8 (not released yet)
MDL-35870 – Performance improvements to SCORM
MDL-37401 – SCORM auto-commit – allows Moodle to save data periodically even if the SCORM doesn’t call “commit”

New improvements you might not have noticed in 2.7:
MDL-28261 – Check for live internet connectivity while using SCORM – warns the user if the SCORM is unable to communicate with the LMS.
MDL-41476 – The SCORM spec defines a small amount of data that can be stored when using SCORM 1.2 packages, we have added a setting that allows you to disable this restriction within Moodle to allow larger amounts of data to be stored (you may need to modify your SCORM package to send more data to make this work.)

Thanks to Ian Wild, Martin Holden, Tony O’Neill, Peter Bowen, André Mendes, Matteo Scaramuccia, Ray Morris, Vignesh, Hansen Ler, Faisal Kaleem and many other people who have helped report/test and suggest fixes related to SCORM recently including the Moodle HQ Integration team (Eloy, Sam, Marina, Dan, Damyon, Rajesh) who have all been on the receiving end of reviewing some SCORM patches recently!

GSOC 2014 update

Another year of GSOC has just finished and Vignesh has done a great job helping us to improve a number of areas of SCORM!
I’m really glad to finally have some changes made to the JavaScript datamodel files as part of MDL-35870 – I’m hoping this will improve the performance of the SCORM player, as the JavaScript can now be cached properly by the user’s browser rather than being dynamically generated with PHP.

Vignesh has made a number of general bug fixes to the SCORM code and has also tidied up the code in the 2.8 branch so that it now complies with Moodle’s coding guidelines.

These changes have involved almost every single file in the SCORM module, and significant architectural changes have been made. We’ve done our best to avoid regressions (thanks Ray for testing SCORM 2004), but due to the large number of changes (and the fact that we only have one Behat test for SCORM), it would be really great if people could test the 2.8 branch with their SCORM content before release so we can pick up any other regressions that may have occurred.

Thanks heaps to Vignesh for his hard work on SCORM during GSOC – and kudos to Google for running a great program and providing the funding to help it happen!

10 July 2014

Andrew Ruthven

Cloud - in New Zealand!

I've spent a reasonable chunk of the past year working on a project we launched last month: Catalyst Cloud! It is using OpenStack with Ceph as the object store. It has taken a lot of work, and it is now very exciting to see the level of interest we're receiving in this new service!

The great part of this is that we can now offer private cloud services to our customers which provide all the flexibility that we've come to expect with the "cloud", but hosted in New Zealand by a New Zealand owned company, so no concerns about the jurisdiction of your data! Not only are we able to offer private cloud services on our OpenStack cluster(s), but we can also deploy OpenStack onto our customers' own hardware using our ProdStack solution (I get to look directly at the Dashboard shown on that page, which is pretty cool).

Next up is deploying another OpenStack cluster in our new data centre (which is another project I'm working on). In the near future we also hope to start using Open Compute Project hardware for our clusters.

Dan Marsden

Goodbye Turnitin…

Time to say goodbye to the “Dan Marsden Turnitin plugin”… well almost!

Turnitin have done a pretty good job of developing a new plugin to replace the code that I have been working on since Moodle 1.5!

The new version of their plugin contains 3 components:

  1. A module (called turnitintool2) which contains the majority of the code for connecting to their new API and is a self-contained activity like their old “turnitintool” plugin
  2. A replacement plugin for mine (plagiarism_turnitin) which allows you to use plagiarism features within the existing Moodle Assignment, Workshop and forum modules.
  3. A new Moodle block that works with both the above plugins.

The Moodle.org Plugins database entry has been updated to replace my old code with the latest version from Turnitin. We have a number of clients at Catalyst using the new plugin and the migration has mostly gone OK so far – there are a few minor differences between my plugin and the new version from Turnitin, so I encourage everyone to test the upgrade to the new version before running it on their production sites.

I’m encouraging most of our clients to update to the new plugin at the end of this year but I will continue to provide basic support for my version running on all Moodle versions up to Moodle 2.7 and my code continues to be available from my github repository here:
https://github.com/danmarsden/moodle-plagiarism_turnitin

Thanks to everyone who has helped in the past with the plugin I wrote – hopefully this new version from Turnitin will meet everyone’s needs!

31 October 2012

Chris Cormack

Signoff statistics for October 2012

Here are the signoff statistics for bugs in October 2012
  • Kyle M Hall- 24
  • Owen Leonard- 18
  • Chris Cormack- 15
  • Nicole C. Engard- 10
  • Mirko Tietgen- 9
  • Marc Véron- 6
  • Frédéric Demians- 5
  • Jared Camins-Esakov- 5
  • Magnus Enger- 4
  • Jonathan Druart- 4
  • M. de Rooy- 3
  • Melia Meggs- 3
  • wajasu- 2
  • Paul Poulain- 2
  • Fridolyn SOMERS- 2
  • Tomás Cohen Arazi- 2
  • Matthias Meusburger- 1
  • Katrin Fischer- 1
  • Julian Maurice- 1
  • Koha Team Lyon 3- 1
  • Mason James- 1
  • Elliott Davis- 1
  • mathieu saby- 1
  • Robin Sheat- 1

16 October 2012

Chris Cormack

Unsung heroes of Koha 26 – The Ada Lovelace Day Edition

Darla Grediagin

Darla has been using Koha since 2006, for the Bering Strait School District in Alaska. This is pretty neat in itself; what is cooler is that, as far as I know, they have never had a ‘Support Contract’, doing things either by themselves or with the help of IT personnel as needed. One of Darla’s first blog posts that I read was about her struggles trying to install Debian on an eMac. I totally respect anyone who is trying to reclaim hardware from the darkside 🙂

Darla has presented on Koha at conferences, and maintains a blog that has useful information, including sections on what she would do differently, as well as some nice feel-good bits like this, from April 2007:

I know I had an entry titled this before, but I do love OSS programs.   Yesterday I mentioned that I would look at Pines because I like the tool it has to merge MARC records.  Today a Koha developer emailed me to let me know that he is working on this for Koha and it should be available soon.  I can’t imagine getting that kind of service from a vendor.

Hopefully she will be able to make it to KohaCon13 in Reno, NV. It would be great to put a face to the email address 🙂

10 October 2012

Chris Cormack

New Release team for Koha 3.12

Last night on IRC the Koha Community elected a new release team for the 3.12 release. Once again it is a nicely mixed team: there are 16 people involved, from 8 different countries (India, New Zealand, USA, Norway, Germany, France, Netherlands, Switzerland), and four of the 12 roles are filled by women.

The release team will be working super hard to bring you the best release of Koha yet, and you can help:

  • Reporting bugs
  • Testing bug fixes
  • Writing up enhancement requests
  • Using Koha
  • Sending cookies
  • Inventing time travel
  • Killing MARC
  • Winning the lottery and donating the proceeds to the trust to use for Koha work.

24 July 2012

Pass the Source

Google Recruiting

So, Google are recruiting again. From the open source community, obviously. It’s where to find all the good developers.

Here’s the suggestion I made on how they can really get in front of FOSS developers:

Hi [name]

Just a quick note to thank you for getting in touch with so many of our
Catalyst IT staff, both here and in Australia, with job offers. It comes
across as a real compliment to our company that the folks that work here
are considered worthy of Google’s attention.

One thing about most of our staff is that they *love* open source. Can I
suggest, therefore, that one of the best ways for Google to demonstrate
its commitment to FOSS and FOSS developers this year would be to be a
sponsor of the NZ Open Source Awards. These have been very successful at
celebrating and recognising the achievements of FOSS developers,
projects and users. This year there is even an “Open Science” category.

Google has been a past sponsor of the event and it would be good to see
you commit to it again.

For more information see:

http://www.nzosa.org.nz/

Many thanks
Don

09 July 2012

Andrew Caudwell

Inventing On Principle Applied to Shader Editing

Recently I have been playing around with GLSL Sandbox (github), a what-you-see-is-what-you-get shader editor that runs in any WebGL-capable browser (such as Firefox, Chrome and Safari). It gives you a transparent editor pane in the foreground and the resulting compiled fragment shader rendered behind it. Code is recompiled dynamically as it changes. The latest version even has syntax and error highlighting, and bracket matching.

There have been a few other WebGL-based shader editors like this in the past, such as Shader Toy by Iñigo Quílez (aka IQ of demo scene group RGBA) and his more recent (though I believe unpublished) editor used in his fascinating live coding videos.

Finished compositions are published to a gallery with the source code attached, and can be ‘forked’ to create additional works. Generally the author will leave their Twitter account name in the source code.

I have been trying to get to grips with some more advanced raycasting concepts, and being able to code something up in sandbox and see the effect of every change is immensely useful.

Below are a bunch of my GLSL sandbox creations (batman symbol added by @emackey):

GLSL Sandbox is just the latest example of the merit of software development tools that provide immediate feedback, and highlights the major advantages scripting languages have over heavy compiled languages with long build and linking times, which make experimentation costly and tedious. Inventing on Principle, a presentation by Bret Victor, is a great introduction to this topic.

I would really like a save-draft button that saves shaders locally, so I have some place to keep things that are a work in progress. I might have to look at how I can add this.

Update: Fixed links to point at glslsandbox.com.

05 June 2012

Pass the Source

Wellington City Council Verbal Submission

I made the following submission on the Council’s Draft Long Term Plan. Some of this related to FLOSS. This was a 3 minute slot with 2 minutes for questions from the councillors.

Introduction

I have been a Wellington inhabitant for 22 years and am a business owner. We employ about 140 staff in Wellington, with offices in Christchurch, Sydney, Brisbane and the UK. I am also co-chair of NZRise which represents NZ owned IT businesses.

I have 3 Points to make in 3 minutes.

1. The Long Term plan lacks vision and is a plan for stagnation and erosion

It focuses on selling assets, such as community halls and council operations, on postponing investments, on reducing public services such as libraries and museums, and on increasing user charges. This will not create a city where “talent wants to live”. With this plan, who would have thought the citizens of the city had elected a Green Mayor?

Money speaks louder than words. Both borrowing levels and proposed rate increases are minimal and show a lack of investment in the city, its inhabitants and our future.

My company is about to open an office in Auckland. A manager was recently surveying staff about team allocation and noted, as an aside, that between 10 and 20 Wellington staff would move to Auckland given the opportunity. We are not simply competing with Australia for hearts and minds, we are competing with Auckland whose plans for investment are much higher than our own.

2. Show faith in local companies

The best way to encourage economic growth is to show faith in the talent that actually lives here and pays your rates. This means making sure the council staff have a strong direction and mandate to procure locally. In particular, the procurement process needs to be overhauled to make sure it does not exclude SMEs (our backbone) from bidding for work (see this NZCS story). It needs to be streamlined, transparent and efficient.

A way of achieving local company participation in this is through disaggregation – the breaking up of large-scale initiatives into smaller, more manageable components – for the following reasons:

  • It improves project success rates, which helps the public sector be more effective.
  • It reduces project cost, which benefits the taxpayers.
  • It invites small business, which stimulates the economy.

3. Smart cities are open source cities

Use open source software as the default.

It has been clear for a long time that open source software is the most cost effective way to deliver IT services. It works for Amazon, Facebook, Red Hat and Google and just about every major Silicon Valley success since the advent of the internet. Open source drives the internet and these companies because it has an infinitely scalable licensing model – free. Studies, such as the one I have here from the London School of Economics, show the cost effectiveness and innovation that comes with open source.

It pains me to hear about proposals to save money by reducing library hours and increasing fees, when the amount of money being saved is less than the annual software licence fees our libraries pay, and when world-beating free alternatives exist.

This has to change; looking round the globe, it is the visionary and successful local councils that are mandating the use of FLOSS, from Munich to Vancouver to Raleigh NC to Paris to San Francisco.

As well as saving money, open source brings a state of mind. That is:

  • Willingness to share and collaborate
  • Willingness to receive information
  • The right attitude to be innovative, creative, and try new things

Thank you. There should now be 2 minutes left for questions.

05 January 2012

Pass the Source

The Real Tablet Wars

tl;dr, formerly known as Executive Summary: Openness + Good Taste Wins

Gosh, it’s been a while. But this site is not dead. Just been distracted by identi.ca and Twitter.

I was going to write about Apple, again. A result of unexpected and unwelcome exposure to an iPad over the Christmas Holidays. But then I read Jethro Carr’s excellent post where he describes trying to build the Android OS from Google’s open source code base. He quite mercilessly exposes the lack of “open” in some key areas of that platform.

It is more useful to look at the topic as an issue of “open” vs “closed” where iPad is one example of the latter. But, increasingly, Android platforms are beginning to display similar inane closed attributes – to the disadvantage of users.

Part of my summer break was spent helping out at the premier junior sailing regatta in the world, this year held in Napier, NZ. Catalyst, as a sponsor, has built and is hosting the official website.

I had expected to swan around, sunbathing, drinking cocktails and soaking up some atmosphere. Instead a last minute request for a new “live” blogging section had me blundering around Joomla and all sorts of other technology with which I am happily unfamiliar. Days and nightmares of iPads, Windows, wireless hotspots and offshore GSM coverage.

The plan was simple: the specialist blogger, himself a world-renowned sailor, would take his tablet device out on the water on the spectator boat. From there he would watch and blog starts, racing, finishes and anguished reactions from parents (if there is one thing that unites races and nationalities, it is parental anguish over sporting achievement).

We had a problem in that the web browser on the tablet didn’t work with the web-based text editor used in the Joomla CMS. That had me scurrying around for a replacement to the TinyMCE plugin, just the most common browser-based editing tool. But a quick scan around various forums showed me that the alternative editors were not a solution, and that the real issue was a bug in the client browser.

“No problem”, I thought. “Let’s install Firefox, I know that works”.

But no, Firefox is not available to iPad users, and Apple likes to “protect” its users by tightly controlling whose applications are allowed to run on the tablet. OK, what about Chrome? Same deal. You *have* to use Apple’s own buggy browser; it’s for your own good.

Someone suggested that the iPad’s operating system we were using needed upgrading, and the new version might have a fixed browser. No, we couldn’t do that, because we didn’t have Apple’s music playing software, iTunes, on a PC. Fortunately Vodafone were also a sponsor, and not only did they have iTunes handy, they downloaded the upgrade. Only problem: the upgrade wiped all the apps that our blogger and his family had previously bought and installed.

Er, and the upgrade failed to fix the problem. One day gone.

So a laptop was press-ganged into action, which in the end was a blessing, because other trials later showed that typing blogs fast, on an ocean swell, is very hard without a real keyboard. All those people pushing tablets at schools: keep in mind it is good to have our children *write* stuff, often.

The point of this post is not really to bag Apple, but to bag the mentality that stops people using their own devices in ways that help them through the day. I only wanted to try a different browser to Safari, not an unusual thing to do. Someone else might want to try out a useful little application a friend has written for them, but that wouldn’t be allowed.

But the worst aspect of this is that because of Apple’s success in creating well designed gadgets other companies have decided that “closed” is also the correct approach to take with their products. This is crazy. It was an open platform, Linux Kernel with Android, that allowed them to compete with Apple in the first place and there is no doubt that when given a choice, choice is what people want – assuming “taste” requirements are met.

Other things being equal*, who is going to choose a platform where the company that sold you a neat little gadget controls all the things you do on it? But there is a strong trend by manufacturers such as Samsung, and even Linux distributions, such as Ubuntu, to start placing restrictions on their clients and users. To decide for all of us how we should behave and operate *our* equipment.

The explosive success of the personal computer was that it was *personal*. It was your own productivity, life enhancing device. And the explosive success of DOS and Windows was that, with some notable exceptions, Microsoft didn’t try and stop users installing third party applications. The dance monkey boy video is funny, but the truth is that Microsoft did want “developers, developers, developers, developers” using its platforms because, at the time, it knew it didn’t know everything.

Apple, Android handset manufacturers and even Canonical (Ubuntu) are falling into the trap of not knowing that there is stuff they don’t know and they will probably never know. Similar charges are now being made about Facebook and Twitter. The really useful devices and software will be coming from companies and individuals who realise that whilst most of what we all do is the same as what everyone else does, it is the stuff that we do differently that makes us unique and that we need to control and manage for ourselves. Allow us do that, with taste, and you’ll be a winner.

PS I should also say “thanks” fellow sponsors Chris Devine and Devine Computing for just making stuff work.

* I know all is not equal. Apple’s competitive advantage is that it “has taste” – but its taste is not in its restrictions.

18 May 2011

Andrew Caudwell

Show Your True Colours

This last week saw the release of a fairly significant update to Gource – replacing the outdated, 3DFX-era rendering code with something a bit more modern, utilizing more recent OpenGL features like GLSL pixel shaders and VBOs (vertex buffer objects).

A lot of the improvements are under the hood, but the first thing you’ll probably notice is the elimination of banding artifacts in bloom, the illuminated fog Gource places around directories. This effect is pretty tough on the ‘colour space’ of so-called Truecolor, the maximum colour depth on consumer monitors and display devices, which provides only 256 different shades of grey to play with.

When you render a gradient across the screen, there are 3 or 4 times more pixels than there are shades of each colour, producing visible ‘bands’ of the same shade. If multiple gradients like this get blended together, as happens with bloom, you simply run out of ‘in between’ colours and the issue becomes more exaggerated, as seen below (contrast adjusted for emphasis):

[Screenshot: banding artifacts in the bloom gradient, contrast adjusted]

Those aren’t compression artifacts you’re seeing!

Gource now uses colour diffusion to combat this problem. Instead of sampling the exact gradient of bloom for the distance of a pixel from the centre of a directory, we take a fuzzy sample in that vicinity instead. When zoomed in, you can see the picture is now slightly noisy, but the banding is completely eliminated. Viewed at the intended resolution, you can’t really see the trickery going on – in fact the effect even seems somewhat more natural, a bit closer to how light bouncing off particles of mist would actually behave.

[Screenshot: bloom rendered with colour diffusion – fine noise instead of bands]
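The fuzzy-sampling idea can be illustrated with a toy model. This is not Gource’s actual GLSL – just a minimal Python sketch, with made-up names and an exaggerated 8-shade palette, showing how jittering the sample position before quantizing trades long visible bands for fine noise:

```python
import random

LEVELS = 8  # pretend the display only has 8 shades, exaggerating the banding

def quantize(v):
    """Round a 0..1 intensity to the nearest displayable shade."""
    return round(v * (LEVELS - 1)) / (LEVELS - 1)

def plain_gradient(width):
    # Exact sampling: neighbouring pixels collapse onto the same
    # shade, producing long runs -- the visible bands.
    return [quantize(x / (width - 1)) for x in range(width)]

def diffused_gradient(width, rng):
    # Fuzzy sampling: jitter the sample position by up to one shade
    # either side before quantizing, trading bands for noise.
    jitter = 1.0 / (LEVELS - 1)
    return [quantize(min(1.0, max(0.0, x / (width - 1) + rng.uniform(-jitter, jitter))))
            for x in range(width)]

def band_lengths(pixels):
    """Lengths of runs of identical shades (long runs == visible bands)."""
    runs, count = [], 1
    for a, b in zip(pixels, pixels[1:]):
        if a == b:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    return runs

plain = band_lengths(plain_gradient(512))
fuzzy = band_lengths(diffused_gradient(512, random.Random(42)))
print(max(plain), max(fuzzy))  # the longest run of one shade shrinks dramatically
```

Averaged over the eye, the noisy version still reads as the same gradient, which is why the result looks natural rather than dirty at full resolution.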

The other improvement is speed – everything is now drawn with VBOs: large batches of object geometry are passed to the GPU in as few shipments as possible, eliminating CPU and IO bottlenecks. Shadows cast by files and users are now done in a second pass on the GPU, using the same geometry as the lit pass – making them really cheap compared to before, when we effectively wore the cost of drawing the whole scene twice.

Text is now drawn in a single pass, including shadows, using some fragment shader magic (take two samples of the font texture, offset from each other by one pixel diagonally, and blend appropriately). Given the ridiculous number of file, user and directory names Gource draws at once with some projects (Linux kernel Git import commit, I’m looking at you), doing half as much work there makes a big difference.
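The two-sample trick can be re-created on the CPU to see why it works. Again, this is a toy Python sketch rather than the real fragment shader – the texture, function names and shadow weight are all invented for illustration:

```python
def sample(tex, x, y):
    """Font texture alpha lookup, with transparent texels outside the texture."""
    if 0 <= y < len(tex) and 0 <= x < len(tex[0]):
        return tex[y][x]
    return 0.0

def shade(tex, x, y, text=1.0, shadow=0.4):
    # Sample the glyph alpha at this pixel, and again offset by (1, 1):
    # the offset sample lands inside the glyph exactly when this pixel
    # sits one pixel below-right of it -- i.e. where the shadow belongs.
    a_text = sample(tex, x, y)
    a_shadow = sample(tex, x - 1, y - 1)
    # The glyph wins over the shadow wherever both samples hit.
    return text * a_text + shadow * a_shadow * (1.0 - a_text)

# A single-texel 'glyph' surrounded by empty texels.
tex = [[0.0, 0.0, 0.0],
       [0.0, 1.0, 0.0],
       [0.0, 0.0, 0.0]]

out = [[shade(tex, x, y) for x in range(3)] for y in range(3)]
# out[1][1] is the glyph at full intensity; out[2][2] is its dimmer
# drop shadow -- both produced in a single pass over the pixels.
```

One pass, two texture fetches per pixel – instead of drawing every name twice.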

06 October 2010

Andrew Caudwell

New Zealand Open Source Awards

I discovered today that Gource is a finalist in the Contributor category for the NZOSA awards. Exciting stuff! A full list of nominations is here.

I’m currently taking a working holiday to make some progress on a short film presentation of Gource for Onward!.

Update: here’s the video presented at Onward!:

Craig Anslow presented the video on my behalf (thanks again Craig!), and we did a short Q/A over Skype afterwards. The music in the video is Aksjomat przemijania (Axiom of going by) by Dieter Werner. I suggest checking out his other work!

14 August 2009

Piers Harding

Auth SAML 2.0 for Mahara

Following on from the SAML 2.0 work I've done recently for Moodle, I thought it would be useful to do the same for the Mahara ePortfolio service while I was in the same space. Details of the first release can be found here, with tested versions for both trunk and 1.1_STABLE.

02 August 2009

Piers Harding

Moodle and SAML 2.0 Web SSO

Of late I have been doing a lot of SSO integration work for the NZ Ministry of Education, and during this time I came across an excellent project, FEIDE. One of the offshoots of this has been the development of a high-quality PHP library for SAML 2.0 Web SSO – SimpleSAMLphp.

For Moodle integration, Erlend Strømsvik of Ny Media AS developed an authentication plugin, to which I've made a number of changes around configuration options and Moodle session integration. This has now been documented and added to Moodle Contrib to give it better visibility to the Moodle community at large. Documentation is here and the contrib entry is here.

27 June 2009

Piers Harding

Perl sapnwrfc 0.30

While doing some work for a client recently, I got the opportunity to do some major performance work on sapnwrfc for Perl. The net result is that a number of memory leaks, mainly from Perl values not going out of scope properly, have been fixed.

Additionally, I've had some time to put together a proper cookbook-style set of examples in the sapnwrfc-cookbook. These examples, while written for Perl, are almost identical for the sapnwrfc bindings for Python, Ruby and PHP too.