
Extracting Info From Euca’s Logs

logstash

Introduction

Throughout my tenure as a Quality Engineer, I have had a love/hate relationship with logs. On one hand, they can be my proof that a problem is occurring and possibly the key to tracking down a fix. On the other hand, they can be an endless stream of seemingly unintelligible information. In debugging a distributed system, such as Eucalyptus, logging can be your only hope in tracing down issues with operations that require the coordination of many components.

Logs are generally presented to users by applications as flat text files that rotate their contents over time in order to bound the amount of space they will take up on the filesystem. Gathering information from these files often involves terminal windows, tail, less, and timestamp correlation. The process of manually aggregating, analyzing and correlating logs can be extremely taxing on the eyes and brain. Having a centralized logging mechanism is a great leap forward in streamlining the debug process but still leaves flat text files around for system administrators or testers to analyze for valuable information.

A month or so ago I set out to reinvigorate my relationship with logs by making them sexy again. I looked around at the various open source and proprietary tools on the market and decided to give Logstash a shot at teaching me something new about Eucalyptus through its logs. The “getting started” links I found on the docs page presented a quick and easy way to see what Logstash could do for my use case, namely ingesting and indexing logs sent from rsyslog. Once I got some logs to appear in the ElasticSearch backend, I got a bit giddy as I was now able to search and filter the logs through an API. But alas! I was still looking at text on a freaking black and green screen. BORING! There had to be a better way to visualize this data.
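
Searching through that API looks something like the following sketch (the embedded ElasticSearch instance listens on port 9200 by default; the search term here is just an illustration):

  # Query the embedded ElasticSearch backend directly
  curl -s 'http://localhost:9200/_search?q=message:volume&size=5&pretty=true'

Functional, but still not a visualization.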

I looked around a bit and found Kibana. This beautiful frontend to ElasticSearch gives you a simple and clean interface for creating and saving dashboards that reflect interesting information from your logs. Within minutes of installing Kibana, I had a personalized dashboard set up that showed me the following statistics from my Eucalyptus install, which was undergoing a stress test:

  • Instances run
  • Instances terminated
  • Volumes created
  • Volumes deleted

I had proven that there was value in using Logstash and that it was not complicated to set up or use. I then began to use other dashboards, filters, and search terms to look for anomalous patterns in the log messages. This type of analysis resulted in a couple of issues being opened that I would not have found looking at one screen of text at a time.

Below I will outline the steps to begin your own Logstash journey with Eucalyptus or any other system/application that logs to a filesystem on a Linux box.

Installation

Installing Logstash

  1. Install packages
    • On Ubuntu: 
      apt-get install default-jre git apache2 ntp
    • On CentOS:
      yum install java-1.7.0-openjdk.x86_64 git httpd ntp
  2. Set proper timezone
    1. Ubuntu
    2. CentOS
  3. Download Logstash
    • wget https://logstash.objects.dreamhost.com/release/logstash-1.1.13-flatjar.jar -O logstash.jar
  4. Create LogStash config file for rsyslog input. Create and edit a file named logstash.conf
    • input {
        syslog {
          type => syslog
          port => 5544
        }
      }
      output {
        elasticsearch { embedded => true }
      }
  5. Run logstash JAR
    • nohup java -jar logstash.jar agent -f logstash.conf &
  6. Configure rsyslog on Eucalyptus components by adding the following to the /etc/rsyslog.conf file and replacing <your-logstash-ip> with the address of your Logstash server
    • $ModLoad imfile   # Load the imfile input module
      $ModLoad imklog   # for reading kernel log messages
      $ModLoad imuxsock # for reading local syslog messages
      $InputFileName /var/log/eucalyptus/cloud-output.log
      $InputFileTag clc-sc-log:
      $InputFileStateFile clc-sc-log
      $InputRunFileMonitor
      $InputFileName /var/log/eucalyptus/cc.log
      $InputFileTag cc-log:
      $InputFileStateFile cc-log
      $InputRunFileMonitor
      *.* @@<your-logstash-ip>:5544
  7. Restart rsyslog
    • service rsyslog restart
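
To sanity-check the pipeline end to end, you can emit a test message through syslog on a Eucalyptus component and confirm it was indexed. This is only a rough sketch; it assumes the rsyslog forwarding rule above is in place and that you run the query on the Logstash host itself, and the test string is arbitrary:

  # On the Eucalyptus component: send a test message through the local syslog socket
  logger "logstash_pipeline_test"

  # On the Logstash host: confirm the embedded ElasticSearch instance indexed it
  curl -s 'http://localhost:9200/_search?q=logstash_pipeline_test&pretty=true'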

Installing Kibana 3

  1. Clone the repository from GitHub
    • git clone https://github.com/elasticsearch/kibana.git
  2. Edit the kibana/config.js file and set the elasticsearch line to:
    • elasticsearch:    "http://<your-logstash-public-ip>:9200", 
  3. Copy the Kibana repository to your web server directory
    • CentOS:
      mv kibana/* /var/www/html/; service httpd start
    • Ubuntu:
      mv kibana/* /var/www/

Point your browser to http://<your-logstash-public-ip> and you should be presented with the Kibana interface. Kibana is not specifically a frontend for Logstash but rather a frontend to any ElasticSearch installation. Kibana does provide a default Logstash dashboard as a starting point for your customizations: http://<your-logstash-public-ip>/index.html#/dashboard/file/logstash.json


Getting Started with EucaLobo

Initial Setup

In my previous post, I described the story behind EucaLobo, a graphical interface for managing workloads on AWS and Eucalyptus clouds through a single pane of glass. The tool is built using Javascript and the XUL framework, allowing it to be used on Linux, Windows, and Mac for the following APIs:

  • EC2
  • EBS
  • S3
  • IAM
  • CloudWatch
  • AutoScaling
  • Elastic Load Balancing

To get started download the binary for your platform:

Once installation is complete and EucaLobo starts for the first time you will be prompted to enter an endpoint. My esteemed colleague Tony Beckham has created a great intro video showing how to create and edit credentials and endpoints. The default values have been set to the Eucalyptus Community Cloud, a free and easy way to get started using Eucalyptus and clouds in general. This is a great resource for users who want to get a feel for Eucalyptus without an upfront hardware investment.

Enter the following details if you have your own cloud or would like to use AWS:

After entering an endpoint, the next modal dialog will request that you enter your credentials:

  • Name: Alias for these credentials
  • Access Key
  • Secret Key
  • Default Endpoint: Endpoint to use when these credentials are activated
  • Security Token: Unnecessary for most operations

Any number of endpoints and credentials can be added which makes EucaLobo ideal for users who leverage multiple clouds (both public and private). Once you have loaded up at least one endpoint and credential set, you need to:

  1. Go to the “Manage Credentials” tab
  2. Select a credential in the top pane
  3. Click the “Activate” button

You are now ready to start poking around the services available through EucaLobo. All services are listed on the left pane of the interface, and clicking a tab's name takes you to the implementation of that functionality. The ElasticWolf team did a great job of making an intuitive and simple interface to navigate. As an enhancement, which I hope to get upstream soon, I have added labels to all buttons in the UI so that it is clear which operations will be executed.

Cool Features

Portability

ElasticWolf leverages the XUL framework which enables developers to write their application once and deploy it on Mac/Linux/Windows or any platform that supports Firefox. This level of portability is great to cover a large number of users with minimal effort. So far I have not found any bugs that are platform specific.

Multi-cloud

EucaLobo makes it easy to quickly change endpoints and credentials. My common use cases for this feature are:

  • Switching only endpoints – switching regions in AWS
  • Switching both endpoint and credentials – verifying Eucalyptus behavior against AWS from the same interface
  • Switching only credentials – using different users to validate IAM behavior

multi-cloud

IAM Canned policies

One of the great workflows inherited from ElasticWolf is the ability to use pre-canned policies when associating a policy to users and groups.

canned-policy

Security features

You may be thinking that adding cloud credentials to an application and leaving it open on your desktop is too risky. You would be absolutely correct. To combat this risk, you can set an inactivity timer that will either exit the application or require the user to enter a preset password. The granularity of the timer can be set to as low as 1 minute.

security

S3 advanced features

One of the most powerful features in the S3 API is the ability to lock down (or open up) S3 entities (objects and buckets) using an ACL policy language. Unfortunately, the S3 ACL API is not the most user friendly. With the ACL implementation in EucaLobo, you can choose to share a file publicly or share it with one or more individual users.

s3-acls

CloudWatch Graphs

The reason I began my efforts to get ElasticWolf working with Eucalyptus was to use it as an interface to the newly developed CloudWatch API in Eucalyptus. EucaLobo makes it extremely easy to visualize the usage of each of your instances, volumes, load balancers, and AutoScaling groups.

cloudwatch

Conclusion

EucaLobo has been extremely useful for me during the testing of Eucalyptus 3.3, as well as for managing my home private cloud and AWS accounts. I hope that others can find it as useful and usable as I have. With what I have learned during the development of EucaLobo, I hope to refork ElasticWolf so that I can make a smaller patch upstream for enabling Eucalyptus cloud support.

Please don't hesitate to provide feedback in the form of comments on this blog, as issues on GitHub, or on IRC in the #eucalyptus-qa channel on Freenode. As always, pull requests are welcome: https://github.com/viglesiasce/EucaLobo


The Journey to EucaLobo

The 3.3.0 feature barrage

As a quality engineer, it is always useful to have an at-a-glance view of the state of your system under test. Unfortunately, having reliable graphical tools is not always possible during testing phases, as the UI is often trailing the development of core features. During the 3.3.0 release, the Eucalyptus development team added an incredible number of API calls to its already large catalog of AWS-compatible operations:

  • Elastic Load Balancing
  • Autoscaling
  • CloudWatch
  • Resource Tagging
  • Filtering
  • Maintenance Mode
  • Block Device Mappings

As a result of this onslaught of new service functionality from developers, the UI and QA teams had their work cut out for them. The UI team had decided early on that they needed to make some architectural changes to the UI code, such as leveraging Backbone.js and Rivets. This meant they would only be able to cover the newly added resource tagging and filtering services within the 3.3.0 timeframe. Unfortunately, the UI was not the only client tool that needed to implement new services, as Euca2ools 2.x was also lacking support for ELB, CloudWatch, and AutoScaling. As we split up the services amongst the quality engineers, it became apparent that we had an uphill battle ahead and would need every advantage we could get. I took the lead for the CloudWatch service and began my research as soon as the feature had been committed to the release. In reading about and using the AWS version of CloudWatch, it became clear that the service basically boiled down to:

  1. Putting in time series data
  2. Retrieving statistics on that data over a given interval at a set periodicity

Having worked with time series data before, I knew that without a way to visualize it I would be seriously hindering my ability to verify resulting metrics. I pulled out my handy recipe for Graphite and wrote a simple bash script that would grab a CloudWatch data set from a file and send it to my Graphite server using netcat. This worked as a quick proof of concept that we were storing the correct data and computing its statistics properly over longer periods. One of the major functionalities that is provided by the CloudWatch service is instance monitoring. This data allows users to make educated decisions about how and when to scale their applications. The realtime nature meant that I needed to be able to create arbitrary load patterns on instances and volumes and quickly map that back to CloudWatch data. It became clear that a bash script pulling from a set of text files was not going to be simple or flexible enough for the task.
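
A minimal sketch of that kind of script follows. The file format, metric name, and Graphite hostname are assumptions for illustration; Graphite's plaintext listener accepts "metric value timestamp" lines on port 2003 by default:

  #!/bin/bash
  # Feed "<timestamp> <value>" pairs from a file into Graphite's plaintext listener.
  GRAPHITE_HOST=graphite.example.com
  METRIC=cloudwatch.instance.cpuutilization
  while read timestamp value; do
    echo "${METRIC} ${value} ${timestamp}"
  done < cloudwatch-data.txt | nc ${GRAPHITE_HOST} 2003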

Let the hacking begin

As I began looking around for CloudWatch visualizers, it was clear that not many people had attacked the problem, likely because the AWS Console implementation is solid. One project that almost immediately bubbled to the top, however, was ElasticWolf, the AWS console developed for use with GovCloud. This project had been around for a year or so and had managed to implement a graphical interface for every single service that AWS supported, including AutoScaling, which is still not found in the AWS Console. It seemed like it would not take much time to point the ElasticWolf interface at my Eucalyptus cloud, so I took a stab at the Javascript code that backs the XUL application and ended up with a working version within 24 hours. This timeline, from cloning my first repo to using EucaLobo as my daily driver, is a testament to the API fidelity that Eucalyptus provides. At that point, I had hardcoded many things in the code that made it no longer work with AWS; fortunately, hybrid functionality was irrelevant at the time. A few weeks later, when I had a better idea of how the code was structured and how I could manipulate the UI elements, I was able to reimplement the credential and endpoint management such that it would allow hybrid functionality. This was another great advantage for our team in that we could now run the exact same operations on both AWS and Eucalyptus and compare the results through the same interface. ElasticWolf was also quite useful in defining the workflows that were common to the new services we had implemented. For example, its UI will ensure that there are launch configurations created before you attempt to create an autoscaling group. These types of guard rails allowed us to efficiently learn and master the new features with a low barrier to entry in order to deliver a high quality release within our schedule.

In my next post I will show how to get started with EucaLobo as well as highlight some of its features.


Introducing Micro QA

MicroQA-homepage

I have devoted my last 2 years to testing Eucalyptus. In that period the QA team and I have gone through many iterations of tools to find those that make us most efficient. It has become a never ending and enjoyable quest.

We have evolved our testing processes through the following stages:

  1. Using command line tools exclusively
  2. Writing scripts that call command line tools and parsing their output
  3. Writing scripts using a library to make test creation easier and more efficient without the need for command line tools
  4. Running scripts through a graphical tool in order to make test execution more flexible and simple

Each of these iterations was fueled by some tool chain that came along to solve a problem. My journey at Eucalyptus started between the second and third stages. Euca2ools were the go-to favorite for manual testing, and there was a library aptly named ec2ops floating around that wrapped euca2ools commands using Perl. Since boto was backing euca2ools at the time, I figured I would take a stab at creating a library that called boto directly and that we could start to build our tests with, and thus Eutester was born, moving us into the third phase. Once we had reached this point, we were able to quickly write and execute idempotent tests from the command line. After manual execution of our tests was passing consistently, we were able to parameterize, run, and parallelize our tests using Jenkins. At this fourth phase we have taken a snapshot of our environment and are now able to share it with the rest of the community through our image catalog.

A few of the use cases Micro QA can help with are:

  • Regression testing during development
  • Functional testing after initial installation
  • Load and stress testing before going into production
  • Development platform for Eutester test cases

Benefits for users of Micro QA include:

  • Known working automated tests
  • Constantly increasing number of test scenarios
  • Flexibility to add custom tests as needed for use cases which aren’t covered

How it works

The environment starts with an Ubuntu Precise guest image from Ubuntu's cloud-images. Once the image was downloaded, registered, and started, I installed the Jenkins package. After Jenkins was up and running, I installed a redirect from port 80 to 8080 so that users would not need to remember a port in order to access their Micro QA image and could simply hit: http://<instance-public-ip>. Once the Jenkins instance was reachable on the standard HTTP port (80), we began to add the dependencies for Eutester to the guest OS. The typical environment for Eutester scripts requires boto, paramiko, and virtualenv to isolate the script runtime environment. Once the Python dependencies were successfully installed into a virtualenv, we then set up our projects in the Jenkins install. The jobs for Eutester and Eutester4j were then created with only a single required parameter, namely the contents of the eucarc file generated by the cloud. Each script checks out its own environment so that both development and stable Eutester versions can run side by side.
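
Roughly, that guest setup looks like the following. This is only a sketch based on the description above; the iptables rule and the repository URL are assumptions:

  # Redirect the standard HTTP port to Jenkins on 8080 (assumed iptables approach)
  iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080

  # Isolated Python environment for Eutester scripts
  virtualenv /opt/eutester-env
  source /opt/eutester-env/bin/activate
  pip install boto paramiko
  pip install git+https://github.com/eucalyptus/eutester.git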

Installation

In order to install the Micro QA image follow the instructions here: https://github.com/eucalyptus/micro-qa/blob/master/README.md

Usage

  1. Once at the main page of the Micro QA instance, click the “Build” button on the right side of the Instance Suite project.
  2. Enter your eucarc file into the text box on the next screen
  3. Hit build at the bottom of the page
  4. You will be taken to the currently running job. Click the blue bar under the job timestamp on the left.
  5. Here you will see the console output for the test run.
  6. When the script completes, the bottom of the console output will display a summary of the results.

Using Scalr for Automation of your Eucalyptus Cloud

Introduction

scalr_logo

 

Euca logo

I have been using Eucalyptus heavily (as a quality engineer, it is my day to day) for the past year and a half. I know the ins and outs of the system and am constantly tracking new features and bug fixes as they arrive. This knowledge makes me a prime candidate to find out how other pieces of the cloud story can integrate with Eucalyptus.

I run a small cloud at home that I use for development and testing of different software stacks. Some of the tools that I've learned to use and hack on since I turned up my cloud include Graphite, Gitorious, Jenkins, Testlink, and Zenoss. The issue with getting most of these (and any) open-source tools running is that they often require a very particular base OS and specific dependency versions in order to install cleanly. This makes Eucalyptus a great tool for figuring out the right (and easiest) way to deploy an open source tool, as it allows users to create and save any particular stack they have built for later use. Another great part about using Eucalyptus as a development tool is that it allows any and all distros to be loaded into a cloud and made available to many users. When I look into a new tool, I can rest assured that I can find one of its supported distros in my Eucalyptus cloud. Currently I have images registered for Debian 5/6, Ubuntu 12.04/12.10/13.04, Fedora 17, CentOS 6, and, begrudgingly, Windows 7 (used almost exclusively to manage vCenter nodes, yuck).

There are many ways to populate and provision application stacks in a cloudy model. One way would be to load the entire application stack onto an image, then re-bundle it and save it off. Another approach, and the one that I rely on, is to populate base images that can then be provisioned by scripts in order to run the application of choice. In using this model I had come up with many different scripts and user-data files to populate the images listed above with applications. My approach allowed apps to be deployed easily, but I ended up with lots of replicated code across the user-data that I was passing to instances. Another pain was that it required me to remember which user-data scripts belonged with which images. Although not impossible to turn my apps up and down, it was definitely cumbersome and certainly not push-button.
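
As an illustration, launching a base image with a provisioning script passed as user-data looks something like the following. The image ID, keypair name, and script name are placeholders, and the flags follow euca2ools' euca-run-instances:

  # Launch a registered base image and hand it a provisioning script as user-data
  euca-run-instances emi-12345678 -k mykey -t m1.small -f ./install-graphite.sh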

Enter Scalr.

Scalr allows me to populate scripts into a common database that is shared across all images and clouds. Because Scalr allows multiple scripts to be run during instance turn-up (and manually, for that matter), I am now able to modularize my scripts to reduce the amount of code I need to maintain. Another element of Scalr that makes my life easier is its ability to auto-scale an application. For my purposes I am not scaling an app to more than one instance, but if I ever bring down an instance erroneously, Scalr will automatically repopulate the instance and run the scripts necessary to reconstruct the app. The final efficiency I get out of using Scalr is that I can load up multiple clouds and use the same scripts against similar base images within each of them. As a tester I bring up and tear down at least one private cloud a day, on top of using my AWS account for both testing and production use, so this may be the last feature listed but it is certainly not the least in my mind.

Eucalyptus+Scalr is a developer/tester's dream, so let's get down to business on how to get this solution running on your own cloud.

Install Scalr

  1. Run the script found here on an Ubuntu 12.04 (Precise) server or instance: https://gist.github.com/4527791
  2. You will be prompted a few times to enter the password listed in the script (default is yourpassword)
  3. Ensure that the default virtual host (scalr.local) can be resolved by either adding a DNS entry to the server or a host entry on your client (see the example host entry after this list). The default hostname can be changed on lines 4-5 in the script from step 1.
  4. Login to Scalr as admin/admin: http://scalr.local
  5. Click on Accounts in the top left corner and create a user account
  6. Click Settings->Core Settings, and set the “Event handler URL” at the bottom to the hostname you chose in step 3 (default is scalr.local)
  7. Log out and back in with the user credentials created in step 5
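
For example, a host entry on your client could look like the following (the IP is a placeholder for your Scalr server's address):

  # Map the Scalr virtual host to the server's address
  echo "192.0.2.10  scalr.local" >> /etc/hosts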

Scalrize Your Images

You need to install an agent, Scalarizr, on any images used to create server templates, or roles in Scalr parlance. After installing the Scalarizr agent you will register each of your images as a role. To install Scalarizr:

  1. Install the repository package as appropriate for your distro: Debian  RHEL/CentOS
  2. Install the Scalarizr package appropriate to the cloud that will be used. In our case “scalarizr-eucalyptus”
  3. Ensure that your image can properly resolve the “Event Handler URL” entered when installing Scalr
  4. Rebundle and register the image with Eucalyptus

Register Your Eucalyptus Cloud

Scalr works with many cloud providers, including AWS, and was able to leverage a good amount of that client code in order to support Eucalyptus. It would seem that the Eucalyptus integration was last looked at in the Euca 2.x timeframe, so many things that Eucalyptus now fully supports as of 3.1+ (EBS-backed images, for example) are not supported. Other missing functionality includes keypairs (which can be worked around using scripts in Scalr), and all instances are launched in the default security group (I am not sure why).

In order to setup Scalr with your cloud credentials:

  1. In the top left of the navigation bar click “default”, then “Manage”
  2. Click “Actions” on the right, then “Configure”
  3. Set your timezone
  4. Click the Eucalyptus logo
  5. Click the green plus sign to add a new Eucalyptus cloud

sclar-add-cloud

Creating a Role

Scalr uses a paradigm for cloud automation that requires that cloud images be registered as Roles. These roles are then added to Farms in order to deploy an app. Each role can only appear once per Farm. Scalr allows you to catalog, version and deploy scripts using a templating mechanism where values can be set at the Role, Farm, or individual server level.

Some common role types are:

  1. Base Images
  2. Load Balancers
  3. Web Servers

In order to create a Role:

  1. Go to this page, replacing the hostname if necessary: http://scalr.local/#/roles/edit
  2. Once there, enter a name for the role, for example: Ubuntu Precise Base
  3. Click on the check box that represents what category of role this is, in our case we will check the checkbox for Base
  4. Click over to the Images tab and enter the information about the image you registered in the steps above
  5. Enter the image info including its machine image ID
  6. Click Add (left side) then Save (bottom center)

Scalr - Roles

Adding your scripts

One of the most powerful parts of Scalr is the ability to write and reuse templated scripts. Scalr also allows you to share, fork, and version your scripts. Scripts can be added to Roles, Farms, or individual servers for execution at boot, at termination, or manually during any part of an instance's lifecycle. Creating, managing, and deploying scripts allows you to work around some shortcomings of the current Scalr/Eucalyptus integration, such as only being able to use the default security group and the lack of a keypair being passed to instances launched through Scalr.

  1. Go to http://scalr.local/#/scripts/view in order to add some basic scripts 
  2. Click the green plus sign on the right of the Scalr Web Console
  3. When adding scripts you will need to give them a name and a description along with the actual code
  4. The first script we will add will be called “SSH Key Inject” and ensure that our SSH key can be added to an instance: 
    #!/bin/bash
    echo %ssh_key% > /root/.ssh/authorized_keys
    
  5. The next script we will add will install Graphite, a scalable realtime graphing application and can be found here

You will notice that the “SSH Key Inject” script added above uses a wildcard parameter, %ssh_key%, that can be configured at script run time, role provisioning, or farm launch time. Once we've created our scripts it's time to create our farm, which will pair our scripts with our images, or Roles, so that we can run and terminate our application at will.

Scalr - Scripts

Creating a Farm

Farms are collections of roles that constitute a single deployment. All servers in a farm can be turned up and down in unison in order to deploy an application. Individual roles within a farm can be autoscaled: when Scalr notices fewer than a certain threshold of servers in a particular role, it will automatically launch more servers. As an example Farm, I will be showing how to deploy Graphite (Scalable Realtime Graphing).

  1. Add a role using the procedure above that references a Scalarized Precise Base Image 
  2. Click over to the Farms pane in the Scalr web interface
  3. Click the green plus sign on the right of the Scalr Web Console
  4. Enter a name for this farm, in our case: Graphite
  5. Click over to the Roles tab and then click on Roles Library
  6. Click the plus sign next to Base Images
  7. Click on the icon for the Precise Base Image, then click the green plus sign.

Now that we have added the Base Image role to the Farm we will need to add scripts to the Role that run once the instance is up and running.

  1. Click on the icon that has now appeared towards the top of the page in order to configure what scripts we will run on this Role in the Farm
  2. On the bottom left click Scripting. In this configuration page we can choose which scripts to run, in which order and at what times
  3. Under the “When” dropdown click “Host Up”, then under the “execute” drop down choose the “SSH Key Inject” script we added in the steps from above. Once those two options have been chosen, click the green plus sign
  4.   Click on the row that has shown up for that script
  5. On the right side of the pane:
    1. Ensure that “Where” is set to “All instances of this role”
    2. Under the parameters section at the bottom of the pane, set which public key we want to inject for this role
  6. Add our second script “Install Graphite” at “Host Up”, ensure that “Where” is set to “All instances of this role”
  7. Click save at the bottom of the interface

Once we have defined what roles and scripts are paired up to make our application we can launch our Farm.

  1. Go to the Farms interface: http://scalr.local/#/farms
  2. Click Actions on the right side page
  3. Click Launch

Scalr will now launch your image and run the desired scripts to build out the application. You can use this interface to turn your applications up and down as necessary. If Scalr notices that your app is no longer running on the cloud, it will be automatically relaunched for you. I have used this feature many times to help me with disaster recovery. Once I have an application running properly in my cloud, I ensure that I can terminate any individual instance and that my Scalr configuration is set up to properly rebuild the app.

scalr-farms


Creating a Eucalyptus Test Harness: Jenkins, Testlink, and Eutester

Introduction

This post will lead you through the creation of an environment to run automated test plans against a Eucalyptus installation. The tools used were described in my previous post, namely Jenkins (orchestrator), Testlink (planning & metadata), Eutester (functional testing framework). The use case we will be attacking is that of the Developer QA system sandbox.

The only requirements for this setup:

  1. Your machine can run Jenkins
  2. Python 2.6 or greater installed
  3. Python virtualenv is installed (easy_install virtualenv)
  4. Git is installed

Once complete, you should be able to run test plans against a running cloud with the click of a button. This has only been tested on my MacBook but should work on any platform that can meet these requirements.

Getting setup

Installing Jenkins

The first step in getting this system up and running is to download and install Jenkins on the desired host machine. When completing this exercise I was using my Mac laptop which led me to use the installer found on the right side of this page: Download Jenkins. Further installation instructions for Linux and Windows can be found here: Installing Jenkins.

Once the installation is complete, point your web browser to port 8080 of the machine you installed Jenkins on, for example http://yourjenkins.machine:8080. You should now be presented with the main Jenkins dashboard, which looks like this:

Installing Jenkins Plugins

The setup we are trying to achieve requires that you install a Jenkins plugin. Luckily for us Jenkins makes this installation a breeze:

  1. Go to your Jenkins URL in a web browser
  2. Click Manage Jenkins on the left
  3. Click Manage Plugins
  4. Click the Available tab
  5. Install the following required plugin by clicking the checkbox on the left of its row and then clicking “Download now and install after restart” at the bottom of the page
    • Testlink Plugin
  6. Once the installation is complete, click the checkbox to restart Jenkins when there are no jobs running (i.e., now).

Configure Testlink Plugin

In order to access standardized test plans that are implemented by Eutester scripts and cataloged by Testlink we need to setup the Jenkins Testlink plugin as follows:

  1. Click Manage Jenkins on the left
  2. Click Configure System
  3. Scroll down to the Testlink Section and click the Add button to configure a new Testlink installation to use when pulling test plans and test cases
  4. Configure the options presented as follows:
    • Name: Euca-testlink
    • URL: http://testlink.euca-qa.com/testlink/lib/api/xmlrpc.php
    • Developer Key: 7928ee041d5b20ce3f356992d06ab401
  5. Click Save at the bottom of the page

Now that we have Jenkins and its Testlink plugin configured, we need to add a job that uses this functionality. For users not wishing to know the internals of the job, instructions follow for importing the pre-built job. For the advanced or curious user, the internals of the job are explained in another blog post.

Importing the Pre-configured Job

  1. Go to  your Jenkins home directory (Mac: /Users/Shared/Jenkins/Home, Linux: /var/lib/jenkins)  in a terminal
  2. Create a directory called testlink-testplan in the $JENKINS_HOME/jobs directory
  3. Copy this config file to that directory: Job Config
  4. Change the ownership of the entire testlink-testplan directory to the jenkins user:
    • chown -R jenkins:jenkins $JENKINS_HOME/jobs/testlink-testplan
  5. Go to the Manage Jenkins page then click Reload Configuration from Disk
  6. Upon returning to the main dashboard you should see your job ready to run.
  7. You can now skip down to the section below entitled Running and Viewing Your Test if you do not care to know the internals of the job.

Running and Viewing Your Test

Running the test

  1. In order to run the test job we have just configured, return to the Jenkins main dashboard by clicking Jenkins in the top left corner of the Jenkins page.
  2. Click the Build Now button on the left of the testlink-testplan job 
  3. Ensure that parameters are set properly. The defaults should work other than the topology which will require the IPs of the machines you are trying to test against.

Viewing the test

  1. On the left side of the screen you should see the Build History which should now show at least 1 job running. Look for a build with a blue progress bar under it.

  2. If you would like to see the console output of the currently running test click the blue progress bar.
  3. If you would like to see the main page for the build click the date above the blue progress bar. This screen shows a summary of the test results and any archived artifacts when the build has completed.
  4. The trend button will show a plot of the test duration over time.

You can rinse and repeat as many times as you’d like with this procedure.

Testcases and Testplans

Currently the only testplan available is CloudBasics but soon more testcases and plans will be added. I will make sure to update this blog post with the latest list.


Bridging the Gap: Manual to Automated Testing

Introduction:

Manual testing is a key element of a hardened QA process, allowing the human mind to be used for tasks that lend themselves to critical thinking or are simply not automatable, such as:

  • Exploratory (also called ad hoc) testing, which allows testers to exercise strange and unusual code paths
  • Use case testing, which intends to simulate as closely as possible the end-to-end solution that will be deployed
  • Usability testing, which aims to test whether the product meets the timing, performance, and accessibility needs of the end user

The only way to have enough time to do these kinds of tests (my favorite is exploratory) is to automate as much of the functional and regression testing as you can. Once you know a proper and reliable procedure for producing a functional test it is time to stick that in a regression suite so that it no longer needs to be run by a human.

At Eucalyptus, we have a great system for provisioning and testing a Eucalyptus install (developed by Kyo Lee) which is currently in its third iteration. As new features are added and as the codebase expands, we have identified the need to tie this system into others in order to make a more fluid transition of our manual tests into the automation system. Currently we use three main systems for evaluating the quality of Eucalyptus:

  • Jenkins to continuously build the software
  • Testlink (an open source test plan management tool) to maintain our manual tests
  • Automated provisioning system to run regression suites against the code base.

The key to the integration of these three existing systems will be the use of Jenkins to orchestrate jobs, Testlink to hold test meta-data, and the existing automated system for provisioning a Eucalyptus install. The flow of adding a test case into the automation system will be as follows:
  1. Upload your test script (hopefully written with Eutester) to a git repository.
  2. Create a testcase in Testlink that defines the procedure and any necessary preconditions. If the case to be automated is already in manual testing, this can be skipped.
  3. Add info to the testcase that describes where your new test script lives.
  4. Add the test case to a testplan through Testlink.

Once the test case has been entered into a testplan, there are two things that can happen:
  • If the test was added to the continuous integration suites, it will automatically run against all of our platforms
  • If the test was added to a personal testplan, it can be kicked off manually using the Jenkins UI

With this flow defined, let's take a look at how the individual pieces will interface.

Interfaces:

Jenkins to Testlink:

Testlink is a flexible tool that allows testplans of great complexity to be organized efficiently. Although this on its own makes Testlink my top choice for multi-platform manual test organization, the real secret sauce comes from the API that it presents. The API gives you programmatic access to the metadata of any testplan through an XML-RPC interface. In order to access this API through Jenkins, I could hack together a set of Python scripts to speak to Testlink, but instead all that work has been done for me in the Testlink Plugin. This plugin allows me (as part of a Jenkins “build”) to pull in a testplan with all of its testcases and their respective custom fields and execute a build step/script for each case. In order to take advantage of this I have added a few crucial custom fields to each test case in Testlink:

  • Git Repository
  • Test Script Path
  • Test Script
  • Script Arguments

With these few pieces of information we can allow anybody with a reachable git repo to add a testcase that can be run through automated QA. The testcase in Testlink will look like this:
The script I would use against each testcase would look something like:
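
A minimal sketch of such a build step follows. The environment variable names are assumptions about how the Testlink plugin exposes the custom fields listed above, and the virtualenv step mirrors the Eutester setup described elsewhere in this blog:

  #!/bin/bash
  # Illustrative Jenkins build step: clone the testcase's repo and run its script.
  # GIT_REPOSITORY, TEST_SCRIPT_PATH, and SCRIPT_ARGUMENTS are assumed to be
  # populated from the Testlink custom fields described above.
  git clone "$GIT_REPOSITORY" testcase-repo
  cd testcase-repo
  virtualenv venv && source venv/bin/activate
  pip install boto paramiko
  python "$TEST_SCRIPT_PATH" $SCRIPT_ARGUMENTS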

In the above build step I am simply cloning the repo that the testcase exists in, then running the script with the prescribed arguments. This gives the test case creator enough extensibility to create complex testcases while standardizing the procedure of submitting a test case to the system. Another advantage of including a configurable arguments field is that the same script can be parameterized to execute many test cases.

Jenkins to QA Provisioning

Our current QA system has recently been upgraded to include a user-facing API for deploying Eucalyptus against pre-defined topologies, network modes, and distributions. In order to hook into this system we will be using Python and the json library to parse the data returned from the QA system. The types of calls we will be issuing are:

  • Start provisioning
  • Wait for provisioning to complete
  • Free machines used in test

These steps will each be thrown into a Jenkins job that will allow us to re-use the code across many other jobs using the Parameterized Build Trigger Plugin.

Use Cases:

In this blog post we have shown how this integration can be used to quickly move a test from being manually executed to being automated. In order for this integration to be a true success, however, it must also deliver on at least these other use cases, which will be described in further posts:

  • Continuous Integration
  • Test Report Generation
  • Developer QA system sandbox

Would you like to help us continue to innovate our QA process? We are looking for talented engineers to work with us on the next generation of cloud testing. Click here to apply
