What can we learn from Canada's Research Cloud journey?
John Shillington shares his journey implementing a research computing cloud with zones in Canada, the US and China. John is technical architect for the Cloud Enabled Space Weather Modeling and Data Assimilation Platform (CESWP) project. He highlights the value of cloud computing for researchers and the efficiencies it provides, and talks about some of the challenges of setting up the cloud, and about meeting the NeCTAR team during a visit to Australia earlier this year.
1. What are your views on the value, opportunities and challenges that cloud technology provides for researchers?
That's a small question with a big answer ... even so, my answer still seems too long!
Cloud is such an overused word that I think it's important to clarify what I mean by Cloud Computing. In this case I'm talking specifically about Infrastructure as a Service (IaaS). In other words, services like Amazon's Elastic Compute Cloud (EC2), which allow a user to ask for a virtual machine (VM) with given specifications (for example, RAM, number of cores, attached disk), running a particular operating system and software, and then have it delivered for use on demand, with no operator intervention and no queuing.
For example, I could ask for a small VM with 1GB RAM, 1 core, and 50GB of attached disk, and I'd like it to run an instance of Ubuntu, preloaded with MATLAB. I can do this through a dashboard in a browser, on a command line, or in a script within my code. The cloud service then tells me, more or less immediately, whether or not there are enough resources available to satisfy my request. If there are, it will automatically spin up the instance that I requested and hand me back the IP address and security credentials necessary to access it. If it is successful, my VM will be ready to go within a few minutes at most. When I'm done with it, I terminate it and the resources go back into the cloud pool.
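On the command line, that request-and-release lifecycle looks something like the following sketch, written against the OpenStack command-line client (the flavor, image and key names here are illustrative assumptions, not real resources):

```shell
# Ask the cloud for a VM: a small flavor and an Ubuntu image
# (flavor/image/key names below are examples only)
openstack server create \
    --flavor m1.small \
    --image ubuntu-matlab \
    --key-name my-key \
    research-vm

# Inspect the instance to find the IP address handed back by the cloud
openstack server show research-vm -c addresses

# When the work is done, terminate the VM; its resources
# return to the shared cloud pool
openstack server delete research-vm
```

The same three steps -- request, use, release -- can equally be driven from a browser dashboard or from a script inside research code; the CLI form just makes the on-demand, no-queuing nature of IaaS easy to see.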
Now let's imagine that we build some interfaces to our IaaS cloud to make it easy for researchers to access the kinds of machines that they need for their research.
Space physics researchers from anywhere in the world will be able to go to the CESWP portal and ask for VMs specifically tailored to their research needs, going from thought to action in a matter of minutes. In a more traditional scenario, this could take hours, days or weeks, depending on access to hardware and software. Suddenly the effort required by the user to procure a research environment has been reduced tremendously, freeing up that time for meaningful research. And this is just the tip of the iceberg -- there are many other opportunities that span the whole cycle of research, from initial exploration to collaboration with other researchers, to archiving virtual experimental labs to support provenance. In short, I think Cloud Computing has the capacity to transform the way research is done, reducing the administrative and logistical friction and freeing scientists to spend more of their time focused on their investigations.
In terms of challenges, there are still many: cloud resources are neither free nor ubiquitous, and there will inevitably be either contention for resources or funding challenges if resources are charged for on a pay-per-use basis. Private IaaS clouds are still evolving very rapidly, which is both good and bad.
It's good because they are constantly improving at a remarkable rate. It's bad because if you expect them to be perfect and mature now, you will likely be disappointed. Having said that, our experience has been that they have reached a level of maturity that makes them extremely useful, so it is better to jump in and get started now rather than later -- much as CANARIE (www.canarie.ca), Compute Canada (www.computecanada.org) and Cybera are doing with the Digital Accelerator for Innovation and Research (DAIR) (www.canarie.ca/en/dair-program/about) national cloud platform in Canada, and as NeCTAR is now doing for Australian researchers.
2. When setting up the Canadian cloud for Canadian research communities – what were some of your biggest challenges? How did you overcome them – was there a strategy or approach that worked better than others?
One of the biggest challenges was struggling with the state of the art in Open Source IaaS clouds. When we started the CESWP project, we chose one IaaS cloud platform that was leading the pack and showed tremendous thought-leadership and promise. But after a year of mixed results, we realized that the product just wasn't coming along fast enough to provide a platform that was stable enough for us to confidently offer to our researchers.
So, after careful investigation and much trepidation, we took the dramatic step of switching to OpenStack (http://www.openstack.org/) as our IaaS cloud platform. Believe me, this wasn't a move we took lightly! We were very fortunate -- by design -- to be able to replace the cloud platform with minimal disruption to the software that we had previously written to use the cloud, and utilizing the same hardware as before. Our experience to date with OpenStack -- also chosen for the DAIR cloud and by NeCTAR -- has been extremely positive. It has scaled very well and has been stable and reliable.
Another challenge we faced was dealing with multiple physical sites for our different cloud zones, each of which has different hardware, different operations staff and different policies.
Despite the hard work required to get things set up initially -- much of it logistical and team-building -- once the underlying infrastructure was in place, the actual operation of the multi-zone cloud has been remarkably smooth so far.
3. What can Australians, and particularly the Australian NeCTAR project, learn from your project?
When I visited the Victorian Partnership for Advanced Computing (VPAC) in March, their Chief Executive Officer Bill Appelbe kindly arranged for me to give a talk on our experiences building a research cloud in Canada. While everyone in the audience seemed interested, I noticed two people at the back who seemed very interested.
They kept nodding when I was talking about some of our experiences and I got the impression that they really understood what I was getting at.
Afterwards I had a chance to meet them and it turned out they were working on the NeCTAR project (it was Glenn Moloney, Director of NeCTAR and Tom Fifield, Research Infrastructure Architect on the NeCTAR project). Glenn generously invited me to attend the NeCTAR workshop the following week and I quickly realized that Cybera's cloud work had an enormous amount in common with the NeCTAR project.
In the short run, we've been able to share a lot of our experiences and learnings with the NeCTAR team, from high level concepts to code-level details. In the long run, we fully expect to learn as much from the NeCTAR team as they can learn from us. We see the relationship as mutually beneficial, very high value, and right in line with Cybera's mandate to act as a catalyst for evolving cyberinfrastructure technologies and spurring innovation and economic development.
4. What opportunities do you think may occur in the future to collaborate between the Australian NeCTAR project and your Canadian Research Cloud?
One of our senior cloud developers is going to spend a couple of weeks in Melbourne working with the NeCTAR team during their first implementation of OpenStack. The NeCTAR team will of course be in full control and following their own script, but our cloud developer will be on-site to assist and share his experience and knowledge to make the process smoother. At the same time, I'm sure he'll be learning a lot from the deep systems and architecture experience of the NeCTAR team.
We are hosting Glenn as a speaker at our Summit in Banff -- Data For All: Opening Up The Cloud (www.cybera.ca/summit2011) -- on October 6 and 7, and we hope that other members of the NeCTAR team will attend as well to further share and exchange experiences.
In addition, we are treating the work that we are doing as Open Source, with the goal of sharing, giving back, and being active participants in the community. We hope that some of these tools and documentation will be of value to the NeCTAR project, and perhaps the NeCTAR team may consider also sharing some of their work as the project evolves.
5. Would you like to comment on your experiences setting up your Research Cloud programs? Or, comment on any highlights in your career to date?
I've been with Cybera for just under two years now, and it has been a very exciting time.
We have worked on a number of cloud-based initiatives, including CESWP, DAIR, and a Virtual Computing Lab (VCL) pilot project that is just getting under way. We've been extremely fortunate to have some very strong and mature developers on our team, and to work with very supportive and forward-looking collaborators at CANARIE, the University of Alberta, the University of Sherbrooke, and Compute Canada, to mention just a few. Although it hasn't been without its challenges, we have had an enormously supportive group of people involved who have helped us overcome all of the bumps in the road.

Going forward, I think it is important to continue to build our collaboration with other like-minded groups such as the NeCTAR project -- we all have a lot to learn from each other. That was part of our inspiration to launch a Tech Radar (http://cybera.ca/tech-radar) blog on the Cybera website, to share insights and build a sense of community with fellow cloud developers. Finally, as I said before, I think cloud computing has tremendous potential, among other things, to transform research, making it possible for scientists in a variety of disciplines to spend much more time researching and much less time struggling with technological friction.