
EucaLoader: Load Testing Your Eucalyptus Cloud

[Screenshot: Locust full page]

Introduction

After provisioning a cloud that will be used by many users, it is best practice to do load or burn-in testing to ensure that it meets your stability and scale requirements. These activities can be performed manually, for example by running commands to launch many instances or create many volumes. In order to perform sustained, long-term tests, it is beneficial to have an automated tool that will not only perform the test actions but also allow you to analyze and interpret the results in a simple way.

Background

Over the last year, I have been working with Locust to provide a load testing framework for Eucalyptus clouds. Locust is generally used for load testing web pages, but it allows for customizable clients, which let me hook in our Eutester library to generate load. Once I had created my client, I was able to create Locust “tasks” that map to activities on the cloud. Tasks are user interactions like creating a bucket or deleting a volume. Once the tasks were defined, I was able to compose them into user profiles that define which types of actions each simulated user will run, as well as weight their probability so that the load most closely approximates a real-world use case. In order to make the deployment of EucaLoader as simple as possible, I have baked the entire deployment into a CloudFormation template. This means that once you have the basics of your deployment done, you can start stressing your cloud and analyzing the results with minimal effort.
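
EucaLoader's actual task definitions live in the euca-loader repository; purely as an illustration of the pattern (the client class and its method names below are made up), a Locust user with a custom cloud client looks roughly like this:

# Illustrative sketch only, not the actual EucaLoader source. "CloudClient" is a
# stand-in for the Eutester-backed client; its methods are hypothetical.
from locust import Locust, TaskSet, task

class CloudClient(object):
    def run_instance(self):
        pass  # e.g. run and terminate an instance via Eutester

    def create_volume(self):
        pass  # e.g. create and delete a volume

    def create_bucket(self):
        pass  # e.g. create a bucket and upload an object

class UserTasks(TaskSet):
    @task(3)  # weights skew the mix toward the most common operations
    def run_instance(self):
        self.client.run_instance()

    @task(2)
    def create_bucket(self):
        self.client.create_bucket()

    @task(1)
    def create_volume(self):
        self.client.create_volume()

class SimulatedUser(Locust):
    task_set = UserTasks
    min_wait = 1000  # milliseconds between tasks
    max_wait = 5000

    def __init__(self, *args, **kwargs):
        super(SimulatedUser, self).__init__(*args, **kwargs)
        self.client = CloudClient()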

Using EucaLoader

Prerequisites

In order to use EucaLoader you will first need to load up an Ubuntu Trusty image into your cloud as follows:

# wget https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
# qemu-img convert -O raw trusty-server-cloudimg-amd64-disk1.img trusty-server-cloudimg-amd64-disk1.raw
# euca-install-image -i trusty-server-cloudimg-amd64-disk1.raw -n trusty -r x86_64 -b trusty --virt hvm

We will also need to clone the EucaLoader repository and install its dependencies:

# git clone https://github.com/viglesiasce/euca-loader
# pip install troposphere

Next we will upload credentials for a test account to our object store so that our loader can pull them down for Eutester to use:

# euare-accountcreate loader
# euca_conf --get-credentials  loader.zip --cred-account loader
# s3cmd mb s3://loader
# s3cmd put -P loader.zip s3://loader/admin.zip


Launching the stack

Once inside the euca-loader directory we will create our CloudFormation template and then create our stack by passing in the required parameters:

# ./create-locust-cfn-template.py > loader.cfn
# euform-create-stack --template-file loader.cfn  loader -p KeyName=<your-keypair-name> -p CredentialURL='http://<your-user-facing-service-ip>:8773/services/objectstorage/loader/admin.zip' -p ImageID=<emi-id-for-trusty> -p InstanceType=m1.large
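
For context, create-locust-cfn-template.py builds that template with the troposphere library installed earlier. The sketch below is not the actual script, just a minimal illustration of how such a template is put together (the resource and output names are placeholders):

# Minimal troposphere illustration, not the real create-locust-cfn-template.py.
from troposphere import GetAtt, Output, Parameter, Ref, Template
from troposphere.ec2 import Instance

t = Template()

key_name = t.add_parameter(Parameter("KeyName", Type="String"))
image_id = t.add_parameter(Parameter("ImageID", Type="String"))
instance_type = t.add_parameter(Parameter("InstanceType", Type="String",
                                          Default="m1.large"))
cred_url = t.add_parameter(Parameter("CredentialURL", Type="String"))

# The real template also wires up the Locust master/slaves and Grafana;
# a single instance is enough to show the pattern.
master = t.add_resource(Instance("LocustMaster",
                                 ImageId=Ref(image_id),
                                 InstanceType=Ref(instance_type),
                                 KeyName=Ref(key_name)))

t.add_output(Output("WebPortalUrl", Value=GetAtt(master, "PublicIp")))

print(t.to_json())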

At this point you should be able to monitor the stack creation with the following commands:

# euform-describe-stacks
# euform-describe-stack-events loader

Once the stack shows as CREATE_COMPLETE, the describe stacks command should show outputs which point you to the Locust web portal (WebPortalUrl) and to your Grafana dashboard for monitoring trends (GrafanaURL).
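
If you would rather pull those outputs programmatically, boto 2 pointed at your cloud's CloudFormation service can do it; the endpoint, port, and path below are typical for a Eucalyptus 4.x install but may differ on your cloud:

# Fetch the stack outputs (WebPortalUrl, GrafanaURL) with boto 2.
import boto
from boto.regioninfo import RegionInfo

region = RegionInfo(name="eucalyptus",
                    endpoint="<your-cloudformation-host>")  # assumption
conn = boto.connect_cloudformation(aws_access_key_id="<access-key>",
                                   aws_secret_access_key="<secret-key>",
                                   region=region,
                                   port=8773,
                                   path="/services/CloudFormation",
                                   is_secure=False)

for stack in conn.describe_stacks("loader"):
    for output in stack.outputs:
        print("%s = %s" % (output.key, output.value))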


Starting the tests

In order to start your user simulation, point your web browser to the WebPortalUrl as defined by the describe-stacks output. Once there, you can enter the number of users you'd like to simulate as well as how quickly those users should “hatch”.

[Screenshot: Locust start test]

Once you've started the test, the statistics for each type of request will begin to show up in the Locust dashboard.

[Screenshot: Locust test running]


See your results

In order to better visualize the trends in your results, EucaLoader provides a Grafana dashboard that tracks several of the request types across various metrics. This dashboard is easily customized to your particular test and is meant as a jumping-off point.

[Screenshot: Locust Grafana dashboard]


Deploying Cassandra and Consul with Chef Provisioning

[Image: Consul + Cassandra]

Introduction

Chef Provisioning (née Chef Metal) is an incredibly flexible way to deploy infrastructure. Its many plugins allow users to develop a single methodology for deploying an application that can then be repeated against many types of infrastructure (AWS, Euca, OpenStack, etc.). Chef Provisioning is especially useful when deploying clusters of machines that make up an application, as it allows for machines to be:

  • Staged before deployment
  • Batched for parallelism
  • Deployed in serial when necessary

This level of flexibility means that deploying interesting distributed systems like Cassandra and Consul is a breeze. By leveraging community cookbooks for Consul and Cassandra, we can largely ignore the details of package installation and service management and focus our time on orchestrating the stack in the correct order and configuring the necessary attributes such that our cluster converges properly. For this tutorial we will be deploying:

  • DataStax Cassandra 2.0.x
  • Consul
    • Service discovery via DNS
    • Health checks on a per node basis
  • Consul UI
    • Allows for service health visualization

Once complete, we will be able to use Consul's DNS service to load balance our Cassandra client requests across the cluster, as well as use Consul UI to keep tabs on our cluster's health.

In the process of writing up this methodology, I went a step further and created a repository and toolchain for configuring and managing the lifecycle of clustered deployments. The chef-provisioning-recipes repository will allow you to configure your AWS/Euca cloud credentials and images and deploy any of the clustered applications available in the repository.

Steps to reproduce

Install prerequisites

  • Install ChefDK
  • Install package deps (for CentOS 6)
    yum install python-devel gcc git
  • Install python deps:
    easy_install fabric PyYaml
  • Clone the chef-provisioning-recipes repo:
    git clone https://github.com/viglesiasce/chef-provisioning-recipes

Edit config file

The configuration file (config.yml) contains information about how and where to deploy the cluster. There are two main sections in the file:

  1. Profiles
    1. Which credentials/cloud to use
    2. What image to use
    3. What instance type to use
    4. What username to use
  2. Credentials
    1. Cloud endpoints or region
    2. Cloud access and secret keys

Edit the config.yml file found in the repo such that the default profile points to a CentOS 6 image in your cloud and the default credentials point to the proper cloud.
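
For reference, the deployer simply reads that YAML at startup. The exact key names are defined by the sample config in the repository, so treat the ones below as hypothetical stand-ins that only illustrate the profile/credentials split:

# Illustration of the two config.yml sections; the key names are hypothetical,
# check the sample config in chef-provisioning-recipes for the real ones.
import yaml

with open("config.yml") as f:
    config = yaml.safe_load(f)

profile = config["profiles"]["default"]                 # image, instance type, username...
creds = config["credentials"][profile["credentials"]]   # endpoints and access/secret keys
print(profile["image"], creds["endpoint"])              # hypothetical keys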

Run the deployment

Once the deployer has been configured we simply need to run it and tell it which cluster we would like to deploy. In this case we’d like to deploy Cassandra so we will run the deployer as follows:

./deployer.py cassandra

This will now automate the following process:

  1. Create a chef repository
  2. Download all necessary cookbooks
  3. Create all necessary instances
  4. Deploy Cassandra and Consul

Once this is complete, you should be able to see your instances running in your cloud, tagged as follows: cassandra-default-N. In order to access your Consul UI dashboard, go to http://instance-pub-ip:8500

You should now also be able to query any of your Consul servers for the IPs of your Cassandra cluster:

nslookup cassandra.service.paas.home <instance-pub-ip>
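
The same service catalog is available over Consul's HTTP API on port 8500 (the port the UI runs on), which is handy when picking a Cassandra node from a script; a small sketch using the requests library:

# Query Consul's catalog for the registered Cassandra nodes and pick one.
# Assumes the service is registered under the name "cassandra" as above.
import random
import requests

CONSUL = "http://<instance-pub-ip>:8500"  # any Consul server in the cluster

nodes = requests.get(CONSUL + "/v1/catalog/service/cassandra").json()
addresses = [node["Address"] for node in nodes]

print("Cassandra nodes: %s" % addresses)
print("Connecting to: %s" % random.choice(addresses))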

In order to tear down the cluster simply run:

./deployer.py cassandra --op destroy

Chef Metal with Eucalyptus

Introduction

My pull request to chef-metal-fog was recently accepted and released in version 0.8.0, so a quick post on how to get up and running on your Eucalyptus cloud seemed appropriate.

Chef Metal is a new way to provision your infrastructure using Chef recipes. It allows you to use the same convergent design as normal Chef recipes. You can now define your cloud or bare-metal deployment in a Chef recipe, then deploy, update, and destroy it with chef-client. This flexibility is incredibly useful both for developing new Chef cookbooks and for exploring various topologies of distributed systems.

Game time

First, install the Chef Development Kit. This will install chef-client and a few other tools to get you well on your way to Chef bliss.

Once you have installed the Chef DK on your workstation, install the chef-metal gem into the Chef Ruby environment:

chef gem install chef-metal

You will need to create your Chef repo. This repository will contain all the information about how and where your application gets deployed using Chef Metal. In this case we are naming our app “euca-metal”.

chef generate app euca-metal

You should now see a directory structure as follows:

├── README.md
└── cookbooks
 └── euca-metal
   ├── Berksfile
   ├── chefignore
   ├── metadata.rb
   └── recipes
     └── default.rb

Now that the skeleton of our application has been created, let's edit cookbooks/euca-metal/recipes/default.rb to look like this:

require 'chef_metal_fog'

### Arbitrary name of our deployment
deployment_name = 'chef-metal-test'

### Use the AWS provider to provision the machines
### Here is where we set our endpoint URLs and keys for our Eucalyptus deployment
with_driver 'fog:AWS', :compute_options => { :aws_access_key_id => 'XXXXXXXXXXXXXXX',
                                             :aws_secret_access_key => 'YYYYYYYYYYYYYYYYYYYYYYYYYY',
                                             :ec2_endpoint => 'http://compute.cloud:8773/services/compute',
                                             :iam_endpoint => 'http://euare.cloud:8773/services/Euare'
}

### Create a keypair named after our deployment
fog_key_pair deployment_name do
  allow_overwrite true
end

### Use the key created above to login as root, all machines below
### will be run using these options
with_machine_options ssh_username: 'root', ssh_timeout: 60, :bootstrap_options => {
  :image_id => 'emi-A6EA57D5',
  :flavor_id => 't1.micro',
  :key_name => deployment_name
}

### Launch an instance and name it after our deployment
machine deployment_name do
  ### Install Java on the instance using the Java recipe
  recipe 'java'
end

Once we have defined our deployment we will need to create a local configuration file for chef-client:

mkdir -p .chef; echo 'local_mode true' > .chef/knife.rb

Now that we have defined the deployment and set up chef-client, let's run the damn thing!

chef-client -z cookbooks/euca-metal/recipes/default.rb

You can now see Chef create your keypair, launch your instance, and then attempt to run the “java” recipe as we specified. Unfortunately, this fails: we never told our euca-metal cookbook that it required the Java cookbook, nor did we download that cookbook for it to use. Let's fix that.

First we will tell our euca-metal cookbook that we need it to pull in the ‘java’ cookbook in order to provision the node. We need to add the ‘depends’ line to our cookbook’s metadata.rb file which can be found here: cookbooks/euca-metal/metadata.rb

name 'euca-metal'
maintainer ''
maintainer_email ''
license ''
description 'Installs/Configures euca-metal'
long_description 'Installs/Configures euca-metal'
version '0.1.0'
depends 'java'

Next we will need to actually download that Java cookbook that we now depend on. To do that we need to:

# Change to the euca-metal cookbook directory
cd cookbooks/euca-metal/
# Use berkshelf to download our cookbook dependencies
berks vendor
# Move the berks downloaded cookbooks to our main cookbook repository
# Note that it won't overwrite our euca-metal cookbook
mv berks-cookbooks/* ..
cd ../..
# Rerun our chef-client to deploy Java for realz
chef-client -z cookbooks/euca-metal/recipes/default.rb

You will notice that the machine is not reprovisioned (YAY convergence!). The Java recipe should now be running happily on your existing instance. You can find your ssh keys in the .chef/keys directory.

Happy AWS Compatible Private Cloud Cheffing!!!!

Many thanks to John Keiser for his great work on chef-metal.


Install Eucalyptus 4.0 Using Motherbrain and Chef

 

Introduction

Installing distributed systems can be a tedious and time-consuming process. Luckily, there are many solutions for distributed configuration management available to the open source community. Over the past few months, I have been working on the Eucalyptus cookbook, which allows for standardized deployments of Eucalyptus using Chef. This functionality has already been implemented in MicroQA using individual calls to Knife (the Chef command-line interface) for each machine in the deployment. Orchestration of the deployment is rather static, and thus only three topologies have been implemented as part of the deployment tab.

Last month, Riot Games released Motherbrain, their orchestration framework that allows flexible, repeatable, and scalable deployment of multi-tiered applications. Their approach to the deployment roll-out problem is simple and understandable: you configure manifests that define how your application components are split up, then define the order in which they should be deployed.

For example, in the case of Eucalyptus we have cluster, node, and frontend components. Each component is a set of recipes from the Eucalyptus cookbook. Once we have recipes mapped to components, we need to define the order in which these components should be rolled out in the “stack order” section of our Motherbrain manifest:

stack_order do
  bootstrap 'cloud::full'
  bootstrap 'cloud::default'
  bootstrap 'cloud::frontend'
  bootstrap 'cluster::default'
  bootstrap 'cluster::cluster-controller'
  bootstrap 'cluster::storage-controller'
  bootstrap 'cloud::user-facing'
  bootstrap 'cloud::walrus'
  bootstrap 'cloud::user-console'
  bootstrap 'node::default'
  bootstrap 'cloud::configure'
  bootstrap 'nuke::default'
end

Once we have the components split up and ordered, we need to define our topology. This can be done with another JSON-formatted manifest like so:

{
  "nodes": [
    {
      "groups": ["cloud::frontend", "cloud::configure"],
      "hosts": ["10.0.1.185"]
    },
    {
      "groups": ["cluster::default"],
      "hosts": ["10.0.1.186"]
    },
    {
      "groups": ["node::default"],
      "hosts": ["10.0.1.187", "10.0.1.181"]
    }
  ]
}

With this information, Motherbrain allows you to create arbitrary topologies of your distributed system with repeatability and scalability taken care of. Repeatability comes from using Chef recipes, and the scalability is derived from the nodes in each tier being deployed in parallel. In Eucalyptus terms, this means that no matter how many Node Controllers you'd like to deploy to your cluster, the system will come up in almost constant time. In order to tweak the configuration, you can deploy your stack into a properly parameterized Chef environment.

Now that the concept has been laid out, let's get down to business building our cluster from the 4.0 nightlies.

Installing prerequisites

I have created a script to install and configure Motherbrain and Chef that should work on Enterprise Linux or Mac OS X:

sh <(curl -s https://gist.githubusercontent.com/viglesiasce/9734682/raw/install_motherbrain.sh)

If you’d like to do the steps manually you can:

  1. Install ruby 2.0.0
  2. Install gems
    1. chef
    2. motherbrain
    3. chef-zero
  3. Get cookbooks and dependencies
    1. eucalyptus – https://github.com/eucalyptus/eucalyptus-cookbook
    2. ntp – https://github.com/opscode-cookbooks/ntp.git
    3. selinux – https://github.com/opscode-cookbooks/selinux.git
    4. yum – https://github.com/opscode-cookbooks/yum.git
  4. Upload all cookbooks to your Chef server
  5. Configure Motherbrain
    1. mb configure

Customizing your deployment

  1. Go into the Eucalyptus cookbook directory (~/chef-repo/cookbooks/eucalyptus)
  2. Edit the bootstrap.json file to match your deployment topology
    1. Ensure at least 1 IP/Machine for each component
    2. Same IP can be used for all machines (Cloud-in-a-box)
  3. Edit the environment file in ~/chef-repo/cookbooks/eucalyptus/environments/edge-nightly.json
    1. Change the topology configuration to match what you have defined in the bootstrap.json file
    2. Change the network config to match your Eucalyptus deployment
  4. Upload your environment to the Chef server
    1. knife environment from file environments/edge-nightly.json

Deploying your Eucalyptus Cloud

Now that we have defined our topology and network configuration we can deploy the cookbook using the Motherbrain command line interface by telling the tool:

  1. Which bootstrap configuration to use
  2. Which environment to deploy to

For example:

mb eucalyptus bootstrap bootstrap.json -e edge-nightly -v


Testing Riak CS with Eucalyptus

[Image: Eucalyptus + Riak CS]

Introduction

One of the beautiful things about working with IaaS is the disposable nature of instances. If they are not behaving properly due to a bug, or have been misconfigured for some reason, instances can be terminated and rebuilt with more ease than debugging a long-lived and churned-through Linux system. As a quality engineer, this disposability has become invaluable in testing and developing new tools without needing to baby physical or virtual machines.

One of the projects I have been working on lately is an easy deployment of Riak CS into the cloud in order to quickly and repeatedly test the object storage integration provided by Eucalyptus in the 4.0 release. Riak CS is a scalable and distributed object store that provides an S3 interface for managing objects and buckets.

Before testing the Eucalyptus orchestration of Riak CS (or any tool/backend/service that Euca supports for that matter), it is important to understand the basic activities that Eucalyptus will be performing on behalf of the user. Thankfully, Neil Soman wrote a great blog post about how our Riak CS integration is designed.

In this model we can see that we require:

  1. A multi-node Riak CS cluster
  2. A load balancer
  3. A machine to run the Eucalyptus Object Storage Gateway (OSG)

This topology is extremely simple to deploy in Eucalyptus 3.4 using our ELB and by using Vagrant to deploy our Riak CS cluster. Here's how to get your groove on.

Prerequisites

  1. CentOS 6 image loaded into your cloud
  2. Keypair imported or created in the cloud
  3. Security group authorized for ports 8080, 8000, and 22
  4. Install Vagrant

Deploy Riak CS

In order to deploy Riak CS in our cloud we will use Vagrant+Chef+Berkshelf as follows:

  1. Install Vagrant plugins using the following commands:
    • vagrant plugin install vagrant-berkshelf
      vagrant plugin install vagrant-omnibus
      vagrant plugin install vagrant-aws
  2. Import the dummy vagrant box necessary to use vagrant-aws:
    • vagrant box add centos dummy.box
  3. Clone the following repository
    • git clone https://github.com/viglesiasce/vagrant-riak-cs-cluster.git
  4. Edit the following items in the Vagrantfile to reflect the pre-requisites above and to point to your target cloud
    • aws.access_key_id
    • aws.secret_access_key
    • aws.keypair_name
    • aws.ami
    • override.ssh.private_key_path
    • aws.security_groups
    • aws.endpoint
  5. Set the number of nodes to deploy at the top of the Vagrantfile.
  6. Once the cloud options are set, start the Vagrant “up” process, which will deploy the Riak CS nodes and Stanchion:
    • RIAK_CS_CREATE_ADMIN_USER=1 vagrant up --provider=aws
  7. Once Vagrant is complete, login to the first Riak CS node to get its private hostname:
    • vagrant ssh riak1 -c "curl http://169.254.169.254/latest/meta-data/local-hostname"
  8. Join each node to the first that was deployed. For example, to join the second node to the cluster I would run:
    • vagrant ssh riak2 -c "riak-admin cluster join riak@<riak1-private-hostname>"
      vagrant ssh riak2 -c "riak-admin cluster plan; riak-admin cluster commit"

In order to get your access and secret keys, log in to http://riak1-public-ip:8000

Load Balance Your Riak CS Cluster

  1. Create an ELB with the following command:
    • eulb-create-lb -z <AZ-of-your-riak-nodes> -l "lb-port=80, protocol=TCP, instance-port=8080,instance-protocol=TCP" RiakCS
  2. The command above will return the DNS name that you will use as the endpoint for the “objectstorage.s3provider.s3endpoint” property when setting up the OSG. From the sample output below, we would use “RiakCS-229524229045.lb.home”
    • DNS_NAME        RiakCS-229524229045.lb.home
  3. Register your Riak CS nodes with that load balancer:
    • eulb-register-instances-with-lb --instances <instance-id-1>,<instance-id-2> RiakCS

You have now successfully deployed a Riak CS cluster. You can stop here if you'd like, but the real fun starts when you add IAM, ACL, versioning, multipart upload, and bucket lifecycle support to the mix using the Eucalyptus OSG.
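
If you want a quick smoke test before moving on, boto 2 pointed at the load balancer should be able to create and list buckets; the host below is the example DNS name from step 2 above, and the keys are the ones shown in the Riak CS admin UI on port 8000:

# Smoke test Riak CS through the ELB with boto 2 (path-style requests).
from boto.s3.connection import OrdinaryCallingFormat, S3Connection

conn = S3Connection(aws_access_key_id="<riak-cs-admin-access-key>",
                    aws_secret_access_key="<riak-cs-admin-secret-key>",
                    host="RiakCS-229524229045.lb.home",
                    port=80,
                    is_secure=False,
                    calling_format=OrdinaryCallingFormat())

bucket = conn.create_bucket("riak-cs-smoke-test")
print([b.name for b in conn.get_all_buckets()])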

True enthusiasts continue below.

Install and Configure the Eucalyptus OSG Tech Preview

  1. Spin up another CentOS 6 instance in the same security group as used above
  2. Follow the instructions found here to finish the OSG installation and configuration; remember to use the DNS name returned in step 1 above as the s3endpoint.

Creating a Eucalyptus Test Harness: Jenkins, Testlink, and Eutester

Introduction

This post will lead you through the creation of an environment to run automated test plans against a Eucalyptus installation. The tools used were described in my previous post, namely Jenkins (orchestrator), Testlink (planning & metadata), and Eutester (functional testing framework). The use case we will be attacking is that of the Developer QA system sandbox.

The only requirements for this setup:

  1. Your machine can run Jenkins
  2. Python 2.6 or greater installed
  3. Python virtualenv is installed (easy_install virtualenv)
  4. Git is installed

Once complete, you should be able to run test plans against a running cloud with the click of a button. This has only been tested on my MacBook, but it should work on any platform that meets these requirements.

Getting setup

Installing Jenkins

The first step in getting this system up and running is to download and install Jenkins on the desired host machine. When completing this exercise I was using my Mac laptop which led me to use the installer found on the right side of this page: Download Jenkins. Further installation instructions for Linux and Windows can be found here: Installing Jenkins.

Once the installation is complete, point your web browser to port 8080 of the machine you installed Jenkins on, for example http://yourjenkins.machine:8080. You should now be presented with the main Jenkins dashboard.

Installing Jenkins Plugins

The setup we are trying to achieve requires that you install a Jenkins plugin. Luckily for us Jenkins makes this installation a breeze:

  1. Go to your Jenkins URL in a web browser
  2. Click Manage Jenkins on the left
  3. Click Manage Plugins
  4. Click the Available tab
  5. Install the following required plugin by clicking the checkbox on the left of its row and then clicking “Download now and install after restart” at the bottom of the page
    • Testlink Plugin
  6. Once the installation is complete, click the checkbox to restart Jenkins when there are no jobs running (i.e., now).

Configure Testlink Plugin

In order to access standardized test plans that are implemented by Eutester scripts and cataloged by Testlink we need to setup the Jenkins Testlink plugin as follows:

  1. Click Manage Jenkins on the left
  2. Click Configure System
  3. Scroll down to the Testlink Section and click the Add button to configure a new Testlink installation to use when pulling test plans and test cases
  4. Configure the options presented as follows:
    • Name: Euca-testlink
    • URL: http://testlink.euca-qa.com/testlink/lib/api/xmlrpc.php
    • Developer Key: 7928ee041d5b20ce3f356992d06ab401
  5. Click Save at the bottom of the page
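
If you'd like to verify the URL and developer key before building the job, the same XML-RPC endpoint can be exercised directly from Python. tl.checkDevKey and tl.getProjects are standard Testlink API methods, though availability can vary between Testlink versions:

# Sanity-check the Testlink XML-RPC endpoint and developer key configured above.
import xmlrpclib  # xmlrpc.client on Python 3

URL = "http://testlink.euca-qa.com/testlink/lib/api/xmlrpc.php"
DEV_KEY = "7928ee041d5b20ce3f356992d06ab401"

server = xmlrpclib.ServerProxy(URL)
print(server.tl.checkDevKey({"devKey": DEV_KEY}))  # True if the key is valid
print(server.tl.getProjects({"devKey": DEV_KEY}))  # list of test projects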

Now that we have Jenkins and its Testlink plugin configured, we need to add a job that uses this functionality. For users not wishing to know the internals of the job, instructions follow for importing the pre-built job. For the advanced/curious user, the internals of the job are explained in another blog post.

Importing the Pre-configured Job

  1. Go to your Jenkins home directory (Mac: /Users/Shared/Jenkins/Home, Linux: /var/lib/jenkins) in a terminal
  2. Create a directory called testlink-testplan in the $JENKINS_HOME/jobs directory
  3. Copy this config file to that directory: Job Config
  4. Change the ownership of the entire testlink-testplan directory to the jenkins user:
    • chown -R jenkins:jenkins $JENKINS_HOME/jobs/testlink-testplan
  5. Go to the Manage Jenkins page then click Reload Configuration from Disk
  6. Upon returning to the main dashboard you should see your job ready to run.
  7. You can now skip down to the section below entitled Running and Viewing Your Test if you do not care to know the internals of the job.

Running and Viewing Your Test

Running the test

  1. In order to run the test job we have just configured, return to the Jenkins main dashboard by clicking Jenkins in the top left corner of the page.
  2. Click the Build Now button on the left of the testlink-testplan job 
  3. Ensure that parameters are set properly. The defaults should work other than the topology which will require the IPs of the machines you are trying to test against.

Viewing the test

  1. On the left side of the screen you should see the Build History which should now show at least 1 job running. Look for a build with a blue progress bar under it.

  2. If you would like to see the console output of the currently running test click the blue progress bar.
  3. If you would like to see the main page for the build click the date above the blue progress bar. This screen shows a summary of the test results and any archived artifacts when the build has completed.
  4. The trend button will show a plot of the test duration over time.

You can rinse and repeat as many times as you’d like with this procedure.

Testcases and Testplans

Currently the only test plan available is CloudBasics, but more test cases and plans will be added soon. I will make sure to update this blog post with the latest list.


Bridging the Gap: Manual to Automated Testing

Introduction:

Manual testing is a key element of a hardened QA process, allowing the human mind to be used in tasks that lend themselves to critical thinking or are simply not automatable, such as:

  • Exploratory (also called ad hoc) testing, which allows testers to explore strange and unusual code paths
  • Use case testing, which intends to simulate as closely as possible the end-to-end solution that will be deployed
  • Usability testing, which aims to test whether the product meets the timing, performance, and accessibility needs of the end user

The only way to have enough time to do these kinds of tests (my favorite is exploratory) is to automate as much of the functional and regression testing as you can. Once you know a proper and reliable procedure for producing a functional test, it is time to stick that test in a regression suite so that it no longer needs to be run by a human.

At Eucalyptus, we have a great system for provisioning and testing a Eucalyptus install (developed by Kyo Lee) which is currently in its third iteration. As new features are added and as the codebase expands, we have identified the need to tie this system into others in order to make a more fluid transition of our manual tests into the automation system. Currently we use three main systems for evaluating the quality of Eucalyptus:

  • Jenkins to continuously build the software
  • Testlink (an open source test plan management tool) to maintain our manual tests
  • An automated provisioning system to run regression suites against the code base

The key to the integration of these three existing systems will be the use of Jenkins to orchestrate jobs, Testlink to hold test metadata, and the existing automated system for provisioning a Eucalyptus install. The flow of adding a test case into the automation system will be as follows:
  1. Upload your test script (hopefully written with Eutester) to a git repository.
  2. Create a testcase in Testlink that defines the procedure and any necessary preconditions. If the case to be automated is already in manual testing, this can be skipped.
  3. Add info to the testcase that describes where your new test script lives.
  4. Add the test case to a testplan through Testlink.

Once the test case has been added to a test plan, there are two things that can happen:
  • If the test was added to the continuous integration suites, it will automatically run against all of our platforms
  • If the test was added to a personal testplan, it can be kicked off manually using the Jenkins UI

With this flow defined, let's take a look at how the individual pieces will interface.

Interfaces:

Jenkins to Testlink:

Testlink is a flexible tool that allows test plans of great complexity to be organized efficiently. Although this on its own makes Testlink my top choice for multi-platform manual test organization, the real secret sauce comes from the API that it presents. The API gives you programmatic access to the metadata of any test plan by using an XML-RPC interface. In order to access this API through Jenkins, I could hack together a set of Python scripts to speak Testlink, but instead all that work has been done for me in the Testlink Plugin. This plugin allows me (as part of a Jenkins “build”) to pull in a test plan with all of its test cases and their respective custom fields and execute a build step/script for each case. In order to take advantage of this I have added a few crucial custom fields to each test case in Testlink:

  • Git Repository
  • Test Script Path
  • Test Script
  • Script Arguments
With these few pieces of information, we can allow anybody with a reachable git repo to add a test case that can be run through automated QA.

The build step I run against each test case simply clones the repo that the test case lives in, then runs the test script with the prescribed arguments. This gives the test case creator enough extensibility to create complex test cases while standardizing the procedure of submitting a test case to the system. Another advantage of including a configurable arguments field is that the same script can be parameterized to execute many test cases.

Jenkins to QA Provisioning

Our current QA system has recently been upgraded to include a user-facing API for deploying Eucalyptus against pre-defined topologies, network modes, and distributions. In order to hook into this system, we will be using Python and the json library to parse the data returned from the QA system. The types of calls we will be issuing are:

  • Start provisioning
  • Wait for provisioning to complete
  • Free machines used in test

These steps will each be thrown into a Jenkins job that will allow us to re-use the code across many other jobs using the Parameterized Build Trigger Plugin.
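
To make the shape of those jobs concrete, here is a purely illustrative sketch of the start/wait/free flow; the real QA system's API is internal, so every endpoint path and JSON field below is hypothetical:

# Hypothetical sketch of driving a provisioning API from a Jenkins job.
import json
import time
import urllib2

QA_API = "http://qa-system.example.com/api"  # hypothetical endpoint

def call(path, payload=None):
    data = json.dumps(payload) if payload else None
    return json.loads(urllib2.urlopen(QA_API + path, data).read())

# 1. Start provisioning a predefined topology/distro combination
run = call("/provision", {"topology": "2-node", "distro": "centos6"})

# 2. Wait for provisioning to complete
while call("/status/%s" % run["id"])["state"] not in ("done", "failed"):
    time.sleep(60)

# 3. Free the machines used in the test
call("/free/%s" % run["id"])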

Use Cases:

In this blog post we have shown how this integration can be used to quickly move a test from being manually tested to being automated, but in order for this integration to be a true success it must deliver on at least these other use cases, which will be described in further posts:

  • Continuous Integration
  • Test Report Generation
  • Developer QA system sandbox

Would you like to help us continue to innovate our QA process? We are looking for talented engineers to work with us on the next generation of cloud testing. Click here to apply
