Eucalyptus, QA

EucaLoader: Load Testing Your Eucalyptus Cloud

[Screenshot: Locust full page]

Introduction

After provisioning a cloud that will be used by many users, it is best practice to do load or burn-in testing to ensure that it meets your stability and scale requirements. These activities can be performed manually, for example by running commands to launch many instances or create many volumes. In order to perform sustained, long-term tests, it is beneficial to have an automated tool that will not only perform the test actions but also allow you to analyze and interpret the results in a simple way.

Background

Over the last year, I have been working with Locust to provide a load testing framework for Eucalyptus clouds. Locust is generally used for load testing web pages, but it allows for customizable clients, which let me hook in our Eutester library in order to generate load. Once I had created my client, I was able to create Locust “tasks” that map to activities on the cloud. Tasks are user interactions such as creating a bucket or deleting a volume. Once the tasks were defined, I was able to compose them into user profiles that define which types of actions each simulated user will run, as well as weighting their probability so that the load most closely approximates a real-world use case. In order to make the deployment of EucaLoader as simple as possible, I have baked the entire deployment into a CloudFormation template. This means that once you have the basics of your deployment done, you can start stressing your cloud and analyzing the results with minimal effort.
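As a rough illustration of how such a client and tasks fit together (this is a sketch only, not EucaLoader's actual code; the credential path and the create_bucket helper are assumptions), a Locust user wrapping Eutester might look like this:

import time
from locust import Locust, TaskSet, task, events
from eucaops import Eucaops

class CloudTasks(TaskSet):
    """Each task maps to a user action on the cloud and reports its timing to Locust."""

    def _report(self, name, start, exception=None):
        elapsed_ms = int((time.time() - start) * 1000)
        if exception is None:
            events.request_success.fire(request_type="euca", name=name,
                                        response_time=elapsed_ms, response_length=0)
        else:
            events.request_failure.fire(request_type="euca", name=name,
                                        response_time=elapsed_ms, exception=exception)

    @task(3)  # weighted three times more likely than the bucket task
    def run_and_terminate_instance(self):
        start = time.time()
        try:
            image = self.client.get_emi()
            reservation = self.client.run_instance(image)
            self.client.terminate_instances(reservation)
        except Exception as e:
            self._report("RunInstance", start, e)
            return
        self._report("RunInstance", start)

    @task(1)
    def create_bucket(self):
        start = time.time()
        try:
            # create_bucket stands in here for any Eutester S3 helper
            self.client.create_bucket("loader-%d" % int(time.time() * 1000))
        except Exception as e:
            self._report("CreateBucket", start, e)
            return
        self._report("CreateBucket", start)

class CloudUser(Locust):
    task_set = CloudTasks
    min_wait = 1000    # milliseconds between simulated user actions
    max_wait = 10000

    def __init__(self):
        super(CloudUser, self).__init__()
        # Credential path is an assumption; EucaLoader pulls the real zip from the object store
        self.client = Eucaops(credpath="/root/.euca")

The @task weights are what let a profile lean toward some operations over others, which is how the user profiles described above approximate real-world load.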

Using EucaLoader

Prerequisites

In order to use EucaLoader you will first need to load up an Ubuntu Trusty image into your cloud as follows:

# wget https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
# qemu-img convert -O raw trusty-server-cloudimg-amd64-disk1.img trusty-server-cloudimg-amd64-disk1.raw
# euca-install-image -i trusty-server-cloudimg-amd64-disk1.raw -n trusty -r x86_64 -b trusty --virt hvm

We will also need to clone the EucaLoader repository and install its dependencies:

# git clone https://github.com/viglesiasce/euca-loader
# pip install troposphere

Next we will upload credentials for a test account to our object store so that our loader can pull them down for Eutester to use:

# euare-accountcreate loader
# euca_conf --get-credentials loader.zip --cred-account loader
# s3cmd mb s3://loader
# s3cmd put -P loader.zip s3://loader/admin.zip


Launching the stack

Once inside the euca-loader directory we will create our CloudFormation template and then create our stack by passing in the required parameters:

# ./create-locust-cfn-template.py > loader.cfn
# euform-create-stack --template-file loader.cfn loader -p KeyName=<your-keypair-name> -p CredentialURL='http://<your-user-facing-service-ip>:8773/services/objectstorage/loader/admin.zip' -p ImageID=<emi-id-for-trusty> -p InstanceType=m1.large
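For reference, create-locust-cfn-template.py generates the template with troposphere (installed above). A stripped-down sketch of such a generator, with the resource details, user data, and output wiring shown here as illustrative assumptions, might look like:

#!/usr/bin/env python
# Illustrative sketch of a troposphere-based template generator; not the real EucaLoader template.
from troposphere import Base64, GetAtt, Join, Output, Parameter, Ref, Template
from troposphere.ec2 import Instance

t = Template()

key_name = t.add_parameter(Parameter("KeyName", Type="String",
                                     Description="Keypair to install on the loader instance"))
image_id = t.add_parameter(Parameter("ImageID", Type="String",
                                     Description="EMI of the Ubuntu Trusty image"))
instance_type = t.add_parameter(Parameter("InstanceType", Type="String", Default="m1.large"))
credential_url = t.add_parameter(Parameter("CredentialURL", Type="String",
                                           Description="URL of the zipped test-account credentials"))

# A single instance is shown here; the real stack also wires up Locust workers and Grafana.
master = t.add_resource(Instance(
    "LocustMaster",
    ImageId=Ref(image_id),
    InstanceType=Ref(instance_type),
    KeyName=Ref(key_name),
    UserData=Base64(Join("", [
        "#!/bin/bash\n",
        "wget ", Ref(credential_url), " -O /root/admin.zip\n",
    ])),
))

# 8089 is Locust's default web port
t.add_output(Output("WebPortalUrl",
                    Value=Join("", ["http://", GetAtt("LocustMaster", "PublicIp"), ":8089"])))

print(t.to_json())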

At this point you should be able to monitor the stack creation with the following commands:

# euform-describe-stacks
# euform-describe-stack-events loader

Once the stack shows as CREATE_COMPLETE, the describe-stacks command should show outputs that point you to the Locust web portal (WebPortalUrl) and to your Grafana dashboard for monitoring trends (GrafanaURL).


Starting the tests

In order to start your user simulation, point your web browser to the WebPortalUrl as defined by the describe-stacks output. Once there, you can enter the number of users you’d like to simulate as well as how quickly those users should “hatch”.

[Screenshot: starting a Locust test]

Once you’ve started the test, the statistics for each type of request will begin to show up in the Locust dashboard.

[Screenshot: Locust test running]


See your results

In order to better visualize the trends in your results, EucaLoader provides a Grafana dashboard that tracks various metrics for a few of the request types. This dashboard is easily customized to your particular test and is meant as a jumping-off point.

[Screenshot: Grafana dashboard]

Standard
Eucalyptus, QA

Testing Riak CS with Eucalyptus

[Diagram: Eucalyptus + Riak CS]

Introduction

One of the beautiful things about working with IaaS is the disposable nature of instances. If they are not behaving properly due to a bug, or have been misconfigured for some reason, instances can be terminated and rebuilt with more ease than debugging a long-lived and churned-through Linux system. As a quality engineer, this disposability has become invaluable for testing and developing new tools without needing to baby physical or virtual machines.

One of the projects I have been working on lately is an easy deployment of Riak CS into the cloud in order to quickly and repeatedly test the object storage integration provided by Eucalyptus in the 4.0 release. Riak CS is a scalable and distributed object store that provides an S3 interface for managing objects and buckets.

Before testing the Eucalyptus orchestration of Riak CS (or any tool/backend/service that Euca supports for that matter), it is important to understand the basic activities that Eucalyptus will be performing on behalf of the user. Thankfully, Neil Soman wrote a great blog post about how our Riak CS integration is designed.

In this model we can see that we require:

  1. A multi-node Riak CS cluster
  2. A load balancer
  3. A machine to run the Eucalyptus Object Storage Gateway (OSG)

This topology is extremely simple to deploy on Eucalyptus 3.4 by using our ELB for the load balancer and Vagrant to deploy the Riak CS cluster. Here's how to get your groove on.

Prerequisites

  1. A CentOS 6 image loaded into your cloud
  2. A keypair imported or created in the cloud
  3. A security group authorized for ports 8080, 8000, and 22
  4. Vagrant installed

Deploy Riak CS

In order to deploy Riak CS in our cloud we will use Vagrant+Chef+Berkshelf as follows:

  1. Install Vagrant plugins using the following commands:
    • vagrant plugin install vagrant-berkshelf
      vagrant plugin install vagrant-omnibus
      vagrant plugin install vagrant-aws
  2. Import the dummy vagrant box necessary to use vagrant-aws:
    • vagrant box add centos dummy.box
  3. Clone the following repository:
    • git clone https://github.com/viglesiasce/vagrant-riak-cs-cluster.git
  4. Edit the following items in the Vagrantfile to reflect the prerequisites above and to point to your target cloud:
    • aws.access_key_id
    • aws.secret_access_key
    • aws.keypair_name
    • aws.ami
    • override.ssh.private_key_path
    • aws.security_groups
    • aws.endpoint
  5. Set the number of nodes to deploy at the top of the Vagrantfile.
  6. Once the cloud options are set, start the Vagrant “up” process, which will deploy the Riak CS nodes and Stanchion:
    • RIAK_CS_CREATE_ADMIN_USER=1 vagrant up --provider=aws
  7. Once Vagrant is complete, log in to the first Riak CS node to get its private hostname:
    • vagrant ssh riak1 -c "curl http://169.254.169.254/latest/meta-data/local-hostname"
  8. Join each node to the first that was deployed. For example, to join the second node to the cluster I would run:
    • vagrant ssh riak2 -c "riak-admin cluster join riak@<riak1-private-hostname>"
      vagrant ssh riak2 -c "riak-admin cluster plan; riak-admin cluster commit"

In order to get your access and secret keys, log in to http://<riak1-public-ip>:8000.

Load Balance Your Riak CS Cluster

  1. Create an ELB with the following command:
    • eulb-create-lb -z <AZ-of-your-riak-nodes> -l "lb-port=80, protocol=TCP, instance-port=8080, instance-protocol=TCP" RiakCS
  2. The command above will return the DNS name that you will use as the endpoint for the “objectstorage.s3provider.s3endpoint” property when setting up the OSG. From the sample output below, we would use “RiakCS-229524229045.lb.home”:
    • DNS_NAME        RiakCS-229524229045.lb.home
  3. Register your Riak CS nodes with that load balancer:
    • eulb-register-instances-with-lb --instances <instance-id-1>,<instance-id-2> RiakCS
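With the nodes registered behind the load balancer, a quick boto check can confirm that the cluster answers through the ELB (a sketch; the keys come from the Riak CS admin user created above, and the host is the sample DNS name from step 2):

from boto.s3.connection import OrdinaryCallingFormat, S3Connection

# Point boto at the ELB in front of the Riak CS nodes (listener created on port 80 above)
conn = S3Connection(aws_access_key_id="RIAK_CS_ADMIN_ACCESS_KEY",
                    aws_secret_access_key="RIAK_CS_ADMIN_SECRET_KEY",
                    host="RiakCS-229524229045.lb.home", port=80,
                    is_secure=False, calling_format=OrdinaryCallingFormat())
print(conn.get_all_buckets())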

You have now successfully deployed a Riak CS cluster. You can stop here if you’d like, but the real fun starts when you add IAM, ACL, versioning, multipart upload, and bucket lifecycle support to the mix using the Eucalyptus OSG.

True enthusiasts continue below.

Install and Configure the Eucalyptus OSG Tech Preview

  1. Spin up another CentOS 6 instance in the same security group as used above
  2. Follow the instructions found here to finish the OSG installation and configuration; remember to use the DNS name returned by the eulb-create-lb command above as the s3endpoint.
Standard
Eucalyptus, Uncategorized

Bridging the Gap: Manual to Automated Testing

Introduction:

Manual testing is a key element of a hardened QA process, allowing the human mind to be used in tasks that lend themselves to critical thinking or are simply not automatable, such as:

  • Exploratory (also called ad hoc) testing, which allows testers to exercise strange and unusual code paths
  • Use case testing, which intends to simulate as closely as possible the end-to-end solution that will be deployed
  • Usability testing, which aims to test whether the product meets the timing, performance, and accessibility needs of the end user

The only way to have enough time to do these kinds of tests (my favorite is exploratory) is to automate as much of the functional and regression testing as you can. Once you know a proper and reliable procedure for producing a functional test, it is time to stick that test in a regression suite so that it no longer needs to be run by a human.

At Eucalyptus, we have a great system for provisioning and testing a Eucalyptus install (developed by Kyo Lee) which is currently in its third iteration. As new features are added and as the codebase expands, we have identified the need to tie this system into others in order to make a more fluid transition of our manual tests into the automation system. Currently we use three main systems for evaluating the quality of Eucalyptus:

  • Jenkins to continuously build the software
  • Testlink (an open source test plan management tool) to maintain our manual tests
  • An automated provisioning system to run regression suites against the code base

The key to the integration of these three existing systems will be the use of Jenkins to orchestrate jobs, Testlink to hold test metadata, and the existing automated system for provisioning a Eucalyptus install. The flow of adding a test case into the automation system will be as follows:
  1. Upload your test script (hopefully written with Eutester) to a git repository.
  2. Create a testcase in Testlink that defines the procedure and any necessary preconditions. If the case to be automated is already in manual testing, this can be skipped.
  3. Add info to the testcase that describes where your new test script lives.
  4. Add the test case to a testplan through Testlink.

Once the test case has been entered into a testplan, there are two things that can happen:
  • If the test was added to the continuous integration suites, it will automatically run against all of our platforms
  • If the test was added to a personal testplan, it can be kicked off manually using the Jenkins UI

With this flow defined, let's take a look at how the individual pieces will interface.

Interfaces:

Jenkins to Testlink:

Testlink is a flexible tool that allows testplans with great complexity to be organized efficiently. Although this on its own makes Testlink my top choice for multi-platform manual test organization, the real secret sauce comes from the API that it presents. The API gives you programmatic access to the metadata of any testplan by using an XML-RPC interface. In order to access this API through Jenkins, I could hack together a set of Python scripts to speak Testlink, but instead all that work has been done for me in the Testlink Plugin. This plugin allows me (as part of a Jenkins “build”) to pull in a testplan with all of its testcases and their respective custom fields and execute a build step/script for each case. In order to take advantage of this, I have added a few crucial custom fields to each test case in Testlink:

  • Git Repository
  • Test Script Path
  • Test Script
  • Script Arguments

With these few pieces of information, we can allow anybody with a reachable git repo to add a testcase that can be run through automated QA. The testcase in Testlink will look like this:

The script I would use against each testcase would look something like:
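Below is a minimal sketch of such a build step, assuming the Testlink Plugin exposes the custom fields above to the build as environment variables (the TESTLINK_* names are placeholders, not the plugin's actual variable names):

#!/usr/bin/env python
# Hypothetical Jenkins build step; the environment variable names stand in for however
# the Testlink Plugin surfaces the Git Repository, Test Script Path, Test Script,
# and Script Arguments custom fields.
import os
import shlex
import subprocess

repo = os.environ["TESTLINK_GIT_REPOSITORY"]
path = os.environ["TESTLINK_TEST_SCRIPT_PATH"]
script = os.environ["TESTLINK_TEST_SCRIPT"]
arguments = shlex.split(os.environ.get("TESTLINK_SCRIPT_ARGUMENTS", ""))

# Clone the repo the testcase lives in, then run the script with its prescribed arguments
subprocess.check_call(["git", "clone", repo, "testcase-repo"])
subprocess.check_call(["python", os.path.join("testcase-repo", path, script)] + arguments)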

In the above build step I am simply cloning the repo that the testcase exists in, then running the script with the prescribed arguments. This gives the test case creator enough extensibility to create complex testcases while standardizing the procedure of submitting a test case to the system. Another advantage of including a configurable arguments field is that the same script can be parameterized to execute many test cases.

Jenkins to QA Provisioning

Our current QA system has recently been upgraded to include a user-facing API for deploying Eucalyptus against pre-defined topologies, network modes, and distributions. In order to hook into this system, we will be using Python and the json library to parse the data returned from the QA system. The types of calls we will be issuing are:

  • Start provisioning
  • Wait for provisioning to complete
  • Free machines used in test

These steps will each be thrown into a Jenkins job, which will allow us to reuse the code across many other jobs using the Parameterized Build Trigger Plugin.
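A rough sketch of those three calls as they might look from a Jenkins job follows; the endpoint paths and response fields are assumptions, since the QA API is internal:

import json
import time
import urllib2

QA_API = "http://qa-system.example.com/api"  # hypothetical endpoint for the internal QA system

def start_provisioning(topology, distro, network_mode):
    """Ask the QA system to deploy Eucalyptus and return the job id it hands back."""
    payload = json.dumps({"topology": topology, "distro": distro, "network": network_mode})
    response = urllib2.urlopen(QA_API + "/provision", payload)
    return json.loads(response.read())["job_id"]

def wait_for_provisioning(job_id, timeout=3600, interval=60):
    """Poll the QA system until the deployment completes or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = json.loads(urllib2.urlopen("%s/status/%s" % (QA_API, job_id)).read())
        if status.get("state") == "complete":
            return status.get("machines", [])
        time.sleep(interval)
    raise RuntimeError("Provisioning timed out for job %s" % job_id)

def free_machines(job_id):
    """Release the machines used by the test back into the pool."""
    urllib2.urlopen("%s/free/%s" % (QA_API, job_id), data="")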

Use Cases:

In this blog post we have shown how this integration can be used to quickly move a test from being manually executed to being automated, but in order for this integration to be a true success, it must deliver on at least these other use cases, which will be described in further posts:

  • Continuous Integration
  • Test Report Generation
  • Developer QA system sandbox

Would you like to help us continue to innovate our QA process? We are looking for talented engineers to work with us on the next generation of cloud testing. Click here to apply

Standard
Uncategorized

Eutester Basics Part III: Creating your first testcase

Intro to unittest

When writing test cases using Eutester, one can simply write a single Python script that runs through from start to finish and exits when it encounters an error. This kind of script will be easy to put together but will not be very maintainable moving forward when you want to reuse routines in a different script or share your work with the community. In order to make sharing of test case code more efficient, we have chosen to leverage the unittest module from the Python standard library. This library for creating, cataloging, and executing testcases adds value by providing constructs that will be familiar to anyone who has worked or contributed as a tester. In this tutorial I will show how to leverage unittest to write a test case that can be contributed back to the community or used as reproducible steps for a bug filed against Eucalyptus.

Unittest Basics

A test case can be thought of as a series of steps, executed in a particular order, that identify whether a system is operating properly. Each of these steps has an expected result. When the expected results are not met, the case can be marked as a failure. Python's unittest allows us to quickly create these test cases and reuse code from case to case by allowing us to inherit from the TestCase class.

import unittest
from eucaops import Eucaops

class MyFirstTest(unittest.TestCase):

    def setUp(self):
        self.tester = Eucaops(credpath="/home/ubuntu/.euca")
        self.keypair = self.tester.add_keypair()
        self.group = self.tester.add_group()
        self.tester.authorize_group(self.group)
        self.tester.authorize_group(self.group, port=-1, protocol="icmp")
        self.reservation = None

    def testInstance(self):
        #### INTERESTING STUFF GOES HERE
        pass        

    def tearDown(self):
        if self.reservation is not None:
            self.tester.terminate_instances(self.reservation)
        self.tester.delete_keypair(self.keypair)
        self.tester.local("rm " + self.keypair.name + ".pem")
        self.tester.delete_group(self.group)

As you can see from above, two of the major building blocks of a test case are the setup and teardown phases. The setup phase gives the test case the necessary artifacts to execute. Our first case requires a keypair and a security group authorized for both SSH and ping. The teardown phase will then remove the artifacts created, regardless of whether the test passed or failed. Cleaning up after your tests will be very important as you begin to run many cases and suites against your system. Each test case should be able to run without leaving behind artifacts and without relying on artifacts that it did not create. Making tests idempotent will make it easier for developers, testers, and others to reproduce the exact condition you produced with your case.

Now that we have prepared our test case with both setup and teardown steps, it is time to create a routine that actually tests something. Our test method will validate the following:

  1. Running an instance
  2. Network connectivity between the instance and the test machine
  3. Commands can be executed on the remote host

We will be filling out the testInstance method with the following:

    def testInstance(self):
        image = self.tester.get_emi(root_device_type="instance-store")
        ### 1) Run an instance
        try:
            self.reservation = self.tester.run_instance(image, self.keypair.name, self.group.name)
        except Exception, e:
            self.fail("Caught an exception when running the instance: " + str(e))
        for instance in self.reservation.instances:
            ### 2) Ping the instance
            ping_result = self.tester.ping(instance.public_dns_name)
            self.assertTrue(ping_result, "Ping to instance failed")
            ### 3) Run command on instance
            uname_result = instance.sys("uname -r")
            self.assertNotEqual(len(uname_result), 0, "uname failed")

Upon inspection of this code you will notice that we are using the assertion methods, another great element provided to us by the unittest framework. These methods are useful for making sure that your script does not continue to execute after a failure has been encountered. In our case we are using assertTrue to make sure the ping succeeded before proceeding. The next assertion we use is assertNotEqual when checking that the result of running the uname command actually has some output.

Running your test

Once we have setUp, tearDown, and at least one test method, we are ready to run this testcase. In order to run the case we need to add a main function that will be called when we run this class from the command line. To do this, add the following to the bottom of your testcase file:

if __name__ == '__main__':
    unittest.main()

After setting execution permissions, you will be able to call your testcase script directly from the command line. The trick here is that we prefixed our method name with the word “test”, so the unittest library knows that it is a test method and automatically runs setUp before it and tearDown after it. You can build out as many test methods as you’d like in this one file using the same setUp and tearDown methods.
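For example, a hypothetical second method that reuses the same fixtures and only calls helpers already shown above could look like this:

    def testImageAvailable(self):
        ### Reuses the same setUp/tearDown fixtures as testInstance
        image = self.tester.get_emi(root_device_type="instance-store")
        self.assertTrue(image is not None, "No instance-store image was found")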

Standard
Eucalyptus

Eutester Basics Part II: Setting up a development environment

Installing dependencies on system

  1. Check that Python 2.6 or greater is installed; in my case (Ubuntu 11.10), Python 2.7 is installed by default
  2. Install python-setuptools, python-virtualenv, and git:
apt-get install python-setuptools python-virtualenv git

Creating your virtualenv and installing required modules

  1. First create a fresh virtual Python environment in order to keep unwanted changes out of the main Python installation.  In my case I named it “eutester-dev”.
    • virtualenv eutester-dev
  2. In order to enter the environment that was just created for you, enter the following command:
    • source eutester-dev/bin/activate
  3. Now that we are in the virtualenv, we can install the modules necessary for eutester:
    • easy_install boto paramiko
  4. Download eutester from GitHub and install it in the virtualenv:
    • git clone git://github.com/eucalyptus/eutester.git
    • cd eutester
    • python setup.py install

Setting up your Eucarc file

  1. If you are testing against Eucalyptus, the credentials that were downloaded from the cloud are sufficient. Take note of the directory where the zip file was unzipped (i.e. the place where the eucarc file now resides). This path will be the “credpath” argument to the construction of your eutester or eucaops object.
  2. If you are testing against a different AWS-compatible cloud, the following must be provided in a file named eucarc:
    • export EC2_URL=http://mycloud.hostname.com:8773/services/Eucalyptus
    • export S3_URL=http://mywalrus.hostname.com:8773/services/Walrus
    • export EC2_ACCESS_KEY='XXXXXXXXXXXXXXXX'
    • export EC2_SECRET_KEY='XXXXXxXXXXXXXXXXXXXXXXX'
  3. Make sure to put the correct hostnames for the EC2 and S3 URLs as well as the correct Secret and Access keys.

Max Spevack put together a great tutorial on setting up your environment for testing against EC2.

Configuring Python shell for tab complete and history

You should now be able to use your Python environment with eutester, but in order to make development and testing easier you can use the Python shell with tab completion. To achieve this we will create a .pythonrc file in our user's home directory that will set up the shell with tab completion and history for us every time we start it. Your ~/.pythonrc file should look something like this:

import atexit
import os
import readline
import rlcompleter

history = os.path.expanduser('~/.python_history')
readline.read_history_file(history)
readline.parse_and_bind('tab: complete')
atexit.register(readline.write_history_file, history)

Now add the PYTHONSTARTUP variable to your .bashrc (or equivalent) file and source the .bashrc:

echo "export PYTHONSTARTUP=~/.pythonrc" >> ~/.bashrc
source ~/.bashrc

Next we'll need to create the file to log our shell history:

touch ~/.python_history

Start the virtual Python environment shell using a call directly to the Python binary:

~/eutester-dev/bin/python

You should now be able to hit TAB twice to see the available namespaces that you can use within the Python shell. For example, load up the Eucaops class, create a tester object, then type “tester.” and hit TAB twice. You should see a list of the properties and methods provided by the Eucaops class:

ubuntu@ip-192-168-165-22:~$ ./eutester-dev/bin/python
Python 2.7.2+ (default, Oct 4 2011, 20:06:09)
[GCC 4.6.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from eucaops import Eucaops
>>> tester = Eucaops(credpath="~/.euca")

Standard
Eucalyptus

Eutester Basics Part I

Introduction

In my time working for Eucalyptus, I have begun the Eutester project that aims to provide the framework for cloud architects and administrators to validate and benchmark their various AWS-compatible cloud infrastructure options. Enterprises and research institutions are rushing to roll out their strategies for moving their mission-critical applications and services to on-premise, public, or hybrid clouds. The end result of this strategy should be a fault-tolerant and reliable compute, storage, and networking infrastructure. Unfortunately, choosing a private or hybrid IaaS solution to implement is not the end of the road for a cloud strategy. The next level of planning involves choosing and validating the interconnected components of the cloud infrastructure stack. The considerations necessary include, but are not limited to, choosing network topologies, hypervisors, component topology, and storage devices. What many will end up with is a few different ways to implement the end goal, a scalable and robust self-service infrastructure. In order to make “apples to apples” comparisons of these options, it is necessary to have a test library that is agnostic to the underlying components and treats each cloud implementation equally by presenting results in a unified and comparable way.

Eutester Basics

Eutester is written in Python and leverages the boto and paramiko libraries to provide the necessary tools for quickly implementing an automated test strategy. Boto is used in order to provide connections to EC2, S3, and IAM. Paramiko is leveraged to provide root access to nodes that are running the individual cloud components. The eutester module is separated into two main classes: eutester, which provides the necessary logic to get a test bootstrapped, and eucaops, which inherits from eutester and provides predefined routines that can validate operations against the cloud. Here is an example of the difference in using the eutester class by itself rather than using eucaops. Notice that when using both eutester and eucaops you have full access to the underlying EC2 and S3 connections provided by boto.

>>> from eutester import Eutester
>>> from eucaops import Eucaops
>>> eutester = Eutester(credpath="../credentials")
>>> eucaops = Eucaops(credpath="../credentials")
>>> eutester.ec2.get_all_images()
[Image:eri-168B3564, Image:eki-E9A03638, Image:emi-5D824306, Image:emi-30823A1C]
>>> eucaops.get_emi()
Image:emi-5D824306
>>> eucaops.ec2.get_all_images()
[Image:eri-168B3564, Image:eki-E9A03638, Image:emi-5D824306, Image:emi-30823A1C]

As you can see, the only required parameter to get a test up and running is simply the path to a credentials directory that contains a eucarc file, which is used to extract both the access and secret keys as well as the hostnames of the EC2 and S3 implementations. In the case of Eucalyptus these are the Cloud Controller and Walrus. Once the boto connections have been established, the testing is ready to begin and you can start to send commands to the cloud:

>>> emi = eucaops.get_emi()
>>> reservation = eucaops.run_instance(emi)
[2012-03-04 17:46:42,205] [EUTESTER] [DEBUG]: Attempting to run image Image:emi-5D824306 in group default
[2012-03-04 17:46:42,976] [EUTESTER] [DEBUG]: Beginning poll loop for the 1 found in Reservation:r-F4F842A4
[2012-03-04 17:46:42,976] [EUTESTER] [DEBUG]: Beginning poll loop for instance Instance:i-87FB415E to go to running
[2012-03-04 17:48:16,953] [EUTESTER] [DEBUG]: Instance(i-87FB415E) State(running) Poll(9) time elapsed (93)
[2012-03-04 17:48:16,953] [EUTESTER] [DEBUG]: Instance:i-87FB415E is now in running
[2012-03-04 17:48:16,953] [EUTESTER] [DEBUG]: Instance i-87FB415E now in running state

The output here shows that the instance we ran successfully went to the running state, as expected. Once the instance is running it may be useful to ping the instance from the test machine to ensure we have connectivity.

>>> instance = reservation.instances[0]
>>> eucaops.ping(instance.public_dns_name, poll_count=1)
[2012-03-04 18:08:38,075] [EUTESTER] [DEBUG]: Attempting to ping 192.168.47.202
[2012-03-04 18:08:38,076] [EUTESTER] [DEBUG]: [root@localhost]# ping -c 1 192.168.47.202
[2012-03-04 18:08:48,081] [EUTESTER] [DEBUG]: PING 192.168.47.202 (192.168.47.202) 56(84) bytes of data.

--- 192.168.47.202 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

[2012-03-04 18:08:48,081] [EUTESTER] [CRITICAL]: Was unable to ping address
False

We were unable to ping the instance, likely because the group that we launched it in, "default," did not have the proper ports authorized. Let's go back and authorize the ports and retry the ping:

>>> eucaops.authorize_group_by_name(group_name="default", protocol="icmp",port="-1")
[2012-03-04 18:11:35,755] [EUTESTER] [DEBUG]: Attempting authorization of group
>>> eucaops.ping(instance.public_dns_name, poll_count=1)
[2012-03-04 18:12:18,445] [EUTESTER] [DEBUG]: Attempting to ping 192.168.47.202
[2012-03-04 18:12:18,445] [EUTESTER] [DEBUG]: [root@localhost]# ping -c 1 192.168.47.202
[2012-03-04 18:12:18,457] [EUTESTER] [DEBUG]: PING 192.168.47.202 (192.168.47.202) 56(84) bytes of data.
64 bytes from 192.168.47.202: icmp_seq=1 ttl=63 time=6.49 ms

--- 192.168.47.202 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 6.498/6.498/6.498/0.000 ms

[2012-03-04 18:12:18,457] [EUTESTER] [DEBUG]: Was able to ping address
True

Now we can see that the instance is reachable. In these few steps we have validated the following:

  • The hypervisor is operating properly and can run VMs
  • The image we used can boot and start networking properly
  • Opening ports on a security group works properly
  • Networking from the CLC to the CC and from the CC to the node controller is working as expected

These may seem like trivial checks, but they should not be taken for granted when first turning up an installation of a cloud platform. Along with the functionality checks, we also received a benchmark for this particular run-instance request: it took 93 seconds for the instance to go to running. If we rerun those few steps with various images, we can get a feel for the performance of different images on the same cloud. Similarly, if we use the same image against clouds running different hypervisors, we can get a feel for the prep time each hypervisor requires to bring up an instance.
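For example, a rough comparison loop along those lines, reusing the eucaops object from the session above (the timing bookkeeping is the only addition):

import time

# Times how long each EMI takes to reach the running state, then cleans up after itself
timings = {}
for image in eucaops.ec2.get_all_images():
    if not image.id.startswith("emi-"):
        continue  # skip kernel (eki-) and ramdisk (eri-) images
    start = time.time()
    reservation = eucaops.run_instance(image)
    timings[image.id] = int(time.time() - start)
    eucaops.terminate_instances(reservation)

for emi, seconds in timings.items():
    print("%s reached running in %d seconds" % (emi, seconds))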

Here we have seen a case where things more or less worked as expected and were easy to debug and get running. When things are not quite working the way we expect, eutester allows you to start and stop monitoring the logs from all of your Eucalyptus components. This requires a slightly different initialization of the Eucaops or Eutester object, wherein we include a root password for the Eucalyptus component nodes as well as a small configuration file that provides the topology of the components and their hostnames. In my next blog post I'll go over how Eutester can help you when your cloud is not functioning properly.
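A sketch of that alternate initialization, with the file name, its contents, and the keyword arguments shown here as assumptions rather than the definitive interface:

from eucaops import Eucaops

# The config file lists the component hostnames and their roles; the exact keyword
# arguments and file format are assumptions for illustration.
tester = Eucaops(config_file="cloud.conf", password="my-root-password")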

For an initial API doc please see:

https://github.com/eucalyptus/eutester/wiki/API-Documentation

Raise issues on the Github project page at:

https://github.com/eucalyptus/eutester/issues

All contributions are welcome, whether it be testing, refactoring code, new use cases, or added functionality.

Come hang out at #eucalyptus on freenode to discuss any topics related to Eutester or Eucalyptus in general.

Standard