booting a node from a snapshot image

Jacob Everist

Feb 26, 2016, 7:32:34 PM
to cloudlab-users
Hello all,

I've been doing my best all week to solve this problem.  I am trying to create an OpenStack controller node whose Glance repository already contains a prepared custom image, available for spinning up instances.  My approach is the following:

1) Select the default OpenStack profile (currently Liberty, but I tried Kilo as well).

2) Copy the profile under a new name, custom_image.

3) Start an experiment from the custom_image profile.

4) Point a web browser at the controller node and start an Ubuntu instance.

5) Make modifications to the running instance and shut it down.

6) Snapshot the shut-down instance (it is now in the Glance repository).

7) Back on the CloudLab experiment page, snapshot the controller node (which contains the Glance repo).

8) Receive the image URN by email, and add it to the disk_image attribute of the controller node in the geni-lib script.

9) Save the edited profile and start a new experiment.


At this point the experiment starts, but the controller node fails to boot.  It took me a long while to realize that I actually have to manually edit the geni-lib script to boot from the snapshotted image, but I don't seem to be able to do this successfully.  I am using the default script from the OpenStack Liberty profile and changing only one line.  I have tried both the URN and the URL sent by email, but neither seems to work.
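
For concreteness, the one-line change in question (sketched here with the URN that appears, commented out, in the attached script) is:

    controller.disk_image = "urn:publicid:IDN+utah.cloudlab.us+image+aerotest-PG0:custom_image"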

Any advice?

Experiments that currently fail to boot are:

Attached is the geni-lib script; the modified line is line 489.

Also attached is the console log of the failed controller node.

------
#!/usr/bin/env python

import geni.portal as portal
import geni.rspec.pg as RSpec
import geni.rspec.igext as IG
from lxml import etree as ET
import crypt
import random

# Don't want this as a param yet
TBCMD = "sudo mkdir -p /root/setup && sudo -H /tmp/setup/setup-driver.sh 2>&1 | sudo tee /root/setup/setup-driver.log"
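# NOTE: TBURL -- the URL of the tarball of setup scripts that the
# RSpec.Install services below fetch to /tmp -- is referenced later in this
# script but never defined in this paste; it is presumably defined alongside
# TBCMD in the full profile script.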

#
# Create our in-memory model of the RSpec -- the resources we're going to request
# in our experiment, and their configuration.
#
rspec = RSpec.Request()

#
# This geni-lib script is designed to run in the CloudLab Portal.
#
pc = portal.Context()

#
# Define *many* parameters; see the help docs in geni-lib to learn how to modify.
#
pc.defineParameter("release","OpenStack Release",
                   portal.ParameterType.STRING,"liberty",[("liberty","Liberty"),("kilo","Kilo"),("juno","Juno")],
                   longDescription="We provide either OpenStack Liberty (Ubuntu 15.10); Kilo (Ubuntu 15.04); or Juno (Ubuntu 14.10).  OpenStack is installed from packages available on these distributions.")
pc.defineParameter("computeNodeCount", "Number of compute nodes (at Site 1)",
                   portal.ParameterType.INTEGER, 1)
pc.defineParameter("publicIPCount", "Number of public IP addresses",
                   portal.ParameterType.INTEGER, 4,
                   longDescription="Make sure to include both the number of floating IP addresses you plan to need for instances; and also for OpenVSwitch interface IP addresses.  Each OpenStack network this profile creates for you is bridged to the external, public network, so you also need a public IP address for each of those switch interfaces.  So, if you ask for one GRE tunnel network, and one flat data network (the default configuration), you would need two public IPs for switch interfaces, and then you request two additional public IPs that can be bound to instances as floating IPs.  If you ask for more networks, make sure to increase this number appropriately.")
pc.defineParameter("osNodeType", "Hardware type of all nodes",
                   portal.ParameterType.STRING, "",
                   longDescription="A specific hardware type to use for each node.  Cloudlab clusters all have machines of specific types.  When you set this field to a value that is a specific hardware type, you will only be able to instantiate this profile on clusters with machines of that type.  If unset, when you instantiate the profile, the resulting experiment may have machines of any available type allocated.")
pc.defineParameter("osLinkSpeed", "Experiment Link Speed of all nodes",
                   portal.ParameterType.INTEGER, 0,
                   [(0,"Any"),(1000000,"1Gb/s"),(10000000,"10Gb/s")],
                   longDescription="A specific link speed to use for each node.  All experiment network interfaces will request this speed.")


pc.defineParameter("doAptUpgrade","Upgrade OpenStack packages and dependencies to the latest versions",
                   portal.ParameterType.BOOLEAN, False,advanced=True,
                   longDescription="The default images this profile uses have OpenStack and dependent packages preloaded.  To guarantee that these scripts always work, we no longer upgrade to the latest packages by default, to avoid changes.  If you want to ensure you have the latest packages, you should enable this option -- but if there are setup failures, we can't guarantee support.  NOTE: selecting this option requires that you also select the option to update the Apt package cache!")
pc.defineParameter("doAptInstall","Install required OpenStack packages and dependencies",
                   portal.ParameterType.BOOLEAN, True,advanced=True,
                   longDescription="This option allows you to tell the setup scripts not to install or upgrade any packages (other than the absolute dependencies without which the scripts cannot run).  If you start from bare images, or select a profile option that may trigger a package to be installed, we may need to install packages for you; and if you have disabled it, we might not be able to configure these features.  This option is really only for people who want to configure only the openstack packages that are already installed on their disk images, and not be surprised by package or database schema upgrades.  NOTE: this option requires that you also select the option to update the Apt package cache!")
pc.defineParameter("doAptUpdate","Update the Apt package cache before installing any packages",
                   portal.ParameterType.BOOLEAN, True,advanced=True,
                   longDescription="This parameter is a bit dangerous.  We update the Apt package cache by default in case we need to install any packages (i.e., if your base image doesn't have OpenStack packages preinstalled, or is missing some package that the scripts must have).  If the cache is outdated, and Apt tries to download a package, that package version may no longer exist on the mirrors.  Only disable this option if you want to minimize the risk that currently-installed pacakges will be upgraded due to dependency pull-in.  Of course, by not updating the package cache, you may not be able to install any packages (and if these scripts need to install packages for you, they may fail!), so be careful with this option.")
pc.defineParameter("fromScratch","Install OpenStack packages on a bare image",
                   portal.ParameterType.BOOLEAN,False,advanced=True,
                   longDescription="If you do not mind waiting awhile for your experiment and OpenStack instance to be available, you can select this option to start from one of our standard Ubuntu disk images; the profile setup scripts will then install all necessary packages.  NOTE: this option may only be used at x86 cluster (i.e., not the \"Utah Cluster\") for now!  NOTE: this option requires that you select both the Apt update and install package options above!")
pc.defineParameter("flatDataLanCount","Number of Flat Data Networks",
                   portal.ParameterType.INTEGER,1,advanced=True,
                   longDescription="Create a number of flat OpenStack networks.  If you do not select the Multiplex Flat Networks option below, each of these networks requires a physical network interface.  If you attempt to instantiate this profile on nodes with only 1 experiment interface, and ask for more than one flat network, your profile will not instantiate correctly.  Many CloudLab nodes have only a single experiment interface.")
pc.defineParameter("greDataLanCount","Number of GRE Tunnel Data Networks",
                   portal.ParameterType.INTEGER,1,advanced=True,
                   longDescription="To use GRE tunnels, you must have at least one flat data network; all tunnels are implemented using the first flat network!")
pc.defineParameter("vlanDataLanCount","Number of VLAN Data Networks",
                   portal.ParameterType.INTEGER,0,advanced=True,
                   longDescription="If you want to play with OpenStack networks that are implemented using real VLAN tags, create VLAN-backed networks with this parameter.  Currently, however, you cannot combine it with Flat nor Tunnel data networks.")
pc.defineParameter("vxlanDataLanCount","Number of VXLAN Data Networks",
                   portal.ParameterType.INTEGER,0,
                   longDescription="To use VXLAN networks, you must have at least one flat data network; all tunnels are implemented using the first flat network!",
                   advanced=True)

pc.defineParameter("managementLanType","Management Network Type",
                   portal.ParameterType.STRING,"vpn",[("vpn","VPN"),("flat","Flat")],
                   advanced=True,longDescription="This profile creates a classic OpenStack setup, where services communicate not over the public network, but over an isolated private management network.  By default, that management network is implemented as a VPN hosted on the public network; this allows us to not use up a physical experiment network interface just to host the management network, and leaves that unused interface available for OpenStack data networks.  However, if you are using multiplexed Flat networks, you can also make this a Flat network, and it will be multiplexed along with your other flat networks---isolated by VLAN tags.  These VLAN tags are internal to CloudLab, and are invisible to OpenStack.")

pc.defineParameter("multiplexFlatLans", "Multiplex Flat Networks",
                   portal.ParameterType.BOOLEAN, False,
                   longDescription="Multiplex any flat networks (i.e., management and all of the flat data networks) over physical interfaces, using VLANs.  These VLANs are invisible to OpenStack, unlike the NUmber of VLAN Data Networks option, where OpenStack assigns the real VLAN tags to create its networks.  On CloudLab, many physical machines have only a single experiment network interface, so if you want multiple flat networks, you have to multiplex.  Currently, if you select this option, you *must* specify 0 for VLAN Data Networks; we cannot support both simultaneously yet.",
                   advanced=True)

pc.defineParameter("computeNodeCountSite2", "Number of compute nodes at Site 2",
                   portal.ParameterType.INTEGER, 0,advanced=True,
                   longDescription="You can add additional compute nodes from other CloudLab clusters, allowing you to experiment with remote VMs controlled from the central controller at the first site.")

pc.defineParameter("ipAllocationStrategy","IP Addressing",
                   portal.ParameterType.STRING,"script",[("cloudlab","CloudLab"),("script","This Script")],
                   longDescription="Either let CloudLab auto-generate IP addresses for the nodes in your OpenStack networks, or let this script generate them.  If you include nodes at multiple sites, you must choose this script!  The default is this script, because the subnets CloudLab generates for flat networks are sized according to the number of physical nodes in your topology.  However, when the profile sets up your flat OpenStack networks, it tries to enable your VMs and physical nodes to talk to each other---so they all must be on the same subnet.  Thus, you may not have many IPs left for VMs.  However, if the script IP address generation is buggy or otherwise insufficient, you can fall back to CloudLab and see if that improves things.",
                   advanced=True)

pc.defineParameter("tokenTimeout","Keystone Token Expiration in Seconds",
                   portal.ParameterType.INTEGER,14400,advanced=True,
                   longDescription="Keystone token expiration in seconds.")

pc.defineParameter("sessionTimeout","Horizon Session Timeout in Seconds",
                   portal.ParameterType.INTEGER,14400,advanced=True,
                   longDescription="Horizon session timeout in seconds.")

pc.defineParameter("keystoneVersion","Keystone API Version",
                   portal.ParameterType.INTEGER,
                   0, [ (0,"(default)"),(2,"v2.0"),(3,"v3") ],advanced=True,
                   longDescription="Keystone API Version.  Defaults to v2.0 on Juno and Kilo; defaults to v3 on Liberty and onwards.  You can try to force v2.0 on Liberty and onwards, but we cannot guarantee support for this configuration.")
pc.defineParameter("keystoneUseMemcache","Keystone Uses Memcache",
                   portal.ParameterType.BOOLEAN,False,advanced=True,
                   longDescription="Specify whether or not Keystone should use Memcache as its token backend.  In our testing, this has seemed to exacerbate intermittent Keystone internal errors, so it is off by default, and by default, the SQL token backend is used instead.")
pc.defineParameter("keystoneUseWSGI","Keystone Uses WSGI",
                   portal.ParameterType.INTEGER,
                   2, [ (2,"(default)"),(1,"Yes"),(0,"No") ],advanced=True,
                   longDescription="Specify whether or not Keystone should use Apache/WSGI instead of its own server.  This is the default from Kilo onwards.  In our testing, this has seemed to slow down Keystone.")
pc.defineParameter("quotasOff","Unlimit Default Quotas",
                   portal.ParameterType.BOOLEAN,True,advanced=True,
                   longDescription="Set the default Nova and Cinder quotas to unlimited, at least those that can be set via CLI utils (some cannot be set, but the significant ones can be set).")

pc.defineParameter("disableSecurityGroups","Disable Security Group Enforcement",
                   portal.ParameterType.BOOLEAN,False,advanced=True,
                   longDescription="Sometimes it can be easier to play with OpenStack if you do not have to mess around with security groups at all.  This option selects a null security group driver, if set.  This means security groups are enabled, but are not enforced (we set the firewall_driver neutron option to neutron.agent.firewall.NoopFirewallDriver to accomplish this).")

pc.defineParameter("enableInboundSshAndIcmp","Enable Inbound SSH and ICMP",
                   portal.ParameterType.BOOLEAN,True,advanced=True,
                   longDescription="Enable inbound SSH and ICMP into your instances in the default security group, if you have security groups enabled.")

pc.defineParameter("enableNewSerialSupport","Enable new Juno serial consoles",
                   portal.ParameterType.BOOLEAN,False,advanced=True,
                   longDescription="Enable new serial console support added in Juno.  This means you can access serial consoles via web sockets from a CLI tool (not in the dashboard yet), but the serial console log will no longer be available for viewing!  Until it supports both interactivity and logging, you will have to choose.  We download software for you and create a simple frontend script on your controller node, /root/setup/novaconsole.sh , that when given the name of an instance as its sole argument, will connect you to its serial console.  The escape sequence is ~. (tilde,period), but make sure to use multiple tildes to escape through your ssh connection(s), so that those are not disconnected along with your console session.")

pc.defineParameter("ceilometerUseMongoDB","Use MongoDB in Ceilometer",
                   portal.ParameterType.BOOLEAN,False,advanced=True,
                   longDescription="Use MongoDB for Ceilometer instead of MySQL (with Ubuntu 14 and Juno, we have observed crashy behavior with MongoDB, so the default is MySQL; YMMV.")

pc.defineParameter("enableVerboseLogging","Enable Verbose Logging",
                   portal.ParameterType.BOOLEAN,False,advanced=True,
                   longDescription="Enable verbose logging for OpenStack components.")
pc.defineParameter("enableDebugLogging","Enable Debug Logging",
                   portal.ParameterType.BOOLEAN,False,advanced=True,
                   longDescription="Enable debug logging for OpenStack components.")

pc.defineParameter("controllerHost", "Name of controller node",
                   portal.ParameterType.STRING, "ctl", advanced=True,
                   longDescription="The short name of the controller node.  You shold leave this alone unless you really want the hostname to change.")
pc.defineParameter("networkManagerHost", "Name of network manager node",
                   portal.ParameterType.STRING, "nm",advanced=True,
                   longDescription="The short name of the network manager (neutron) node.  You shold leave this alone unless you really want the hostname to change.")
pc.defineParameter("computeHostBaseName", "Base name of compute node(s)",
                   portal.ParameterType.STRING, "cp", advanced=True,
                   longDescription="The base string of the short name of the compute nodes (node names will look like cp-1, cp-2, ... or cp-s2-1, cp-s2-2, ... (for nodes at Site 2, if you request those)).  You shold leave this alone unless you really want the hostname to change.")
#pc.defineParameter("blockStorageHost", "Name of block storage server node",
#                   portal.ParameterType.STRING, "ctl")
#pc.defineParameter("objectStorageHost", "Name of object storage server node",
#                   portal.ParameterType.STRING, "ctl")
#pc.defineParameter("blockStorageNodeCount", "Number of block storage nodes",
#                   portal.ParameterType.INTEGER, 0)
#pc.defineParameter("objectStorageNodeCount", "Number of object storage nodes",
#                   portal.ParameterType.STRING, 0)
###pc.defineParameter("adminPass","The OpenStack admin password",
###                   portal.ParameterType.STRING,"",advanced=True,
###                   longDescription="You should choose a unique password at least 8 characters long, with uppercase and lowercase characters, numbers, and special characters.  CAREFULLY NOTE this password; but if you forget, you can find it later on the experiment status page.  If you don't provide a password, it will be randomly generated, and you can find it on your experiment status page after you instantiate the profile.")

#
# Get any input parameter values that will override our defaults.
#
params = pc.bindParameters()

#
# Verify our parameters and throw errors.
#
###
### XXX: get rid of custom root password support for now
###
###if len(params.adminPass) > 0:
###    pwel = []
###    up = low = num = none = total = 0
###    for ch in params.adminPass:
###        if ch.isupper(): up += 1
###        if ch.islower(): low += 1
###        if ch.isdigit(): num += 1
###        if not ch.isalpha(): none += 1
###        total += 1
###        pass
###    if total < 8:
###        pwel.append("Your password should be at least 8 characters in length!")
###    if up == 0 or low == 0 or num == 0 or none == 0:
###        pwel.append("Your password should contain a mix of lowercase, uppercase, digits, and non-alphanumeric characters!")
###    if params.adminPass == "N!ceD3m0":
###        pwel.append("This password cannot be used.")
###    for err in pwel:
###        pc.reportError(portal.ParameterError(err,['adminPass']))
###        pass
###    pass
###elif False:
####    pc.reportError(portal.ParameterError("You cannot set a null password!",
####                                         ['adminPass']))
###    # Generate a random password that conforms to the above requirements.
###    # We only generate passwds with easy nonalpha chars, but we accept any
###    # nonalpha char to satisfy the requirements...
###    nonalphaChars = [33,35,36,37,38,40,41,42,43,64,94]
###    upperChars = range(65,90)
###    lowerChars = range(97,122)
###    decChars = range(48,57)
###    random.shuffle(nonalphaChars)
###    random.shuffle(upperChars)
###    random.shuffle(lowerChars)
###    random.shuffle(decChars)
    
###    passwdList = [nonalphaChars[0],nonalphaChars[1],upperChars[0],upperChars[1],
###                  lowerChars[0],lowerChars[1],decChars[0],decChars[1]]
###    random.shuffle(passwdList)
###    params.adminPass = ''
###    for i in passwdList:
###        params.adminPass += chr(i)
###        pass
###    pass
###else:
###    #
###    # For now, let Cloudlab generate the random password for us; this will
###    # eventually change to the above code.
###    #
###    pass

if params.computeNodeCount > 8:
    perr = portal.ParameterWarning("Are you creating a real cloud?  Otherwise, do you really need more than 8 compute nodes?  Think of your fellow users scrambling to get nodes :).",['computeNodeCount'])
    pc.reportWarning(perr)
    pass
if params.computeNodeCountSite2 > 8:
    perr = portal.ParameterWarning("Are you creating a real cloud?  Otherwise, do you really need more than 8 compute nodes?  Think of your fellow users scrambling to get nodes :).",['computeNodeCountSite2'])
    pc.reportWarning(perr)
    pass
if params.computeNodeCountSite2 > 0 and not params.multiplexFlatLans:
    perr = portal.ParameterError("If you request nodes at Site 2, you must enable multiplexing for flat lans!",['computeNodeCountSite2','multiplexFlatLans'])
    pc.reportError(perr)
    pass

if params.fromScratch and not params.doAptInstall:
    perr = portal.ParameterError("You cannot start from a bare image and choose not to install any OpenStack packages!",['fromScratch','doAptInstall'])
    pc.reportError(perr)
    pass
if params.doAptUpgrade and not params.doAptInstall:
    perr = portal.ParameterWarning("If you disable package installation, and request package upgrades, nothing will happen; you'll have to comb through the setup script logfiles to see what packages would have been upgraded.",['doAptUpgrade','doAptInstall'])
    pc.reportWarning(perr)
    pass

if params.publicIPCount > 16:
    perr = portal.ParameterError("You cannot request more than 16 public IP addresses, at least not without creating your own modified version of this profile!",['publicIPCount'])
    pc.reportError(perr)
    pass
if (params.vlanDataLanCount + params.vxlanDataLanCount \
    + params.greDataLanCount + params.flatDataLanCount) \
    > (params.publicIPCount - 1):
    perr = portal.ParameterWarning("You did not request enough public IPs to cover all your data networks and still leave you at least one floating IP; you may want to read this parameter's help documentation and change your parameters!",['publicIPCount'])
    pc.reportWarning(perr)
    pass

if params.vlanDataLanCount > 0 and params.multiplexFlatLans:
    perr = portal.ParameterError("You cannot specify vlanDataLanCount > 0 and multiplexFlatLans == True !",['vlanDataLanCount','multiplexFlatLans'])
    pc.reportError(perr)
    pass

if params.greDataLanCount > 0 and params.flatDataLanCount < 1:
    perr = portal.ParameterError("You must specifiy at least one flat data network to request one or more GRE data networks!",['greDataLanCount','flatDataLanCount'])
    pc.reportError(perr)
    pass
if params.vxlanDataLanCount > 0 and params.flatDataLanCount < 1:
    perr = portal.ParameterError("You must specifiy at least one flat data network to request one or more VXLAN data networks!",['vxlanDataLanCount','flatDataLanCount'])
    pc.reportError(perr)
    pass

if params.computeNodeCountSite2 > 0 and params.ipAllocationStrategy != "script":
    # or params.computeNodeCountSite3 > 0)
    badpl = ['ipAllocationStrategy']
    if params.computeNodeCountSite2 > 0:
        badpl.append('computeNodeCountSite2')
#    if params.computeNodeCountSite3 > 0:
#        badpl.append('computeNodeCountSite3')
    perr = portal.ParameterError("You must choose an ipAllocationStrategy of 'script' when including compute nodes at multiple sites!",
                                   badpl)
    pc.reportError(perr)
    params.ipAllocationStrategy = "script"
    pass

if params.ipAllocationStrategy == 'script':
    generateIPs = True
else:
    generateIPs = False
    pass

#
# Give the library a chance to return nice JSON-formatted exception(s) and/or
# warnings; this might sys.exit().
#
pc.verifyParameters()

detailedParamAutoDocs = ''
for param in pc._parameterOrder:
    if param not in pc._parameters:
        continue
    detailedParamAutoDocs += \
      """
  - *%s*

    %s
    (default value: *%s*)
      """ % (pc._parameters[param]['description'],pc._parameters[param]['longDescription'],pc._parameters[param]['defaultValue'])
    pass

tourDescription = \
  "This profile provides a highly-configurable OpenStack instance with a controller, network manager, and one or more compute nodes (potentially at multiple Cloudlab sites). This profile runs x86 or ARM64 nodes. It sets up OpenStack Liberty, Kilo, or Juno (on Ubuntu 15.10, 15.04, or 14.10) according to your choice, and configures all OpenStack services, pulls in some VM disk images, and creates basic networks accessible via floating IPs.  You'll be able to create instances and access them over the Internet in just a few minutes. When you click the Instantiate button, you'll be presented with a list of parameters that you can change to control what your OpenStack instance will look like; **carefully** read the parameter documentation on that page (or in the Instructions) to understand the various features available to you."

###if not params.adminPass or len(params.adminPass) == 0:
passwdHelp = "Your OpenStack admin and instance VM password is randomly-generated by Cloudlab, and it is: `{password-adminPass}` ."
###else:
###    passwdHelp = "Your OpenStack dashboard and instance VM password is `the one you specified in parameter selection`; hopefully you memorized or memoized it!"
###    pass
passwdHelp += "  When logging in to the Dashboard, use the `admin` user; when logging into instance VMs, use the `ubuntu` user."

tourInstructions = \
  """
### Basic Instructions
Once your experiment nodes have booted, and this profile's configuration scripts have finished configuring OpenStack inside your experiment, you'll be able to visit [the OpenStack Dashboard WWW interface](http://{host-%s}/horizon/auth/login/?next=/horizon/project/instances/) (approx. 5-15 minutes).  %s

Please wait to login to the OpenStack dashboard until the setup scripts have completed (we've seen Dashboard issues with content not appearing if you login before configuration is complete).  There are multiple ways to determine if the scripts have finished:
  - First, you can watch the experiment status page: the overall State will say \"booted (startup services are still running)\" to indicate that the nodes have booted up, but the setup scripts are still running.
  - Second, the Topology View will show you, for each node, the status of the startup command on each node (the startup command kicks off the setup scripts on each node).  Once the startup command has finished on each node, the overall State field will change to \"ready\".  If any of the startup scripts fail, you can mouse over the failed node in the topology viewer for the status code.
  - Finally, the profile configuration scripts also send you two emails: once to notify you that controller setup has started, and a second to notify you that setup has completed.  Once you receive the second email, you can login to the Openstack Dashboard and begin your work.

**NOTE:** If the web interface rejects your password or gives another error, the scripts might simply need more time to set up the backend. Wait a few minutes and try again.  If you don't receive any email notifications, you can SSH to the 'ctl' node, become root, and check the primary setup script's logfile (/root/setup/setup-controller.log).  If near the bottom there's a line that includes 'Your OpenStack instance has completed setup', the scripts have finished, and it's safe to login to the Dashboard.

If you need to run the OpenStack CLI tools, or your own scripts that use the OpenStack APIs, you'll find authentication credentials in /root/setup/admin-openrc.sh .  Be aware that the username in this file is `adminapi`, not `admin`; this is an artifact of the days when the profile used to allow you to customize the admin password (it was necessary because the nodes did not have the plaintext password, but only the hash).

*Do not* add any VMs on the `ext-net` network; instead, give them floating IP addresses from the pool this profile requests on your behalf (and increase the size of that pool when you instantiate by changing the `Number of public IP addresses` parameter).  If you try to use any public IP addresses on the `ext-net` network that are not part of your experiment (i.e., any that are not either the control network public IPs for the physical machines, or the public IPs used as floating IPs), those packets will be blocked, and you will be confused.

The profile's setup scripts are automatically installed on each node in `/tmp/setup` .  They execute as `root`, and keep state and downloaded files in `/root/setup/`.  More importantly, they write copious logfiles in that directory; so if you think there's a problem with the configuration, you could take a quick look through these logs --- especially `setup-controller.log` on the `ctl` node.


### Detailed Parameter Documentation
%s
""" % (params.controllerHost,passwdHelp,detailedParamAutoDocs)

#
# Setup the Tour info with the above description and instructions.
#  
tour = IG.Tour()
tour.Description(IG.Tour.TEXT,tourDescription)
tour.Instructions(IG.Tour.MARKDOWN,tourInstructions)
rspec.addTour(tour)

#
# Ok, get down to business -- we are going to create CloudLab LANs to be used as
# (openstack networks), based on user's parameters.  We might also generate IP
# addresses for the nodes, so set up some quick, brutally stupid IP address
# generation for each LAN.
#
flatlanstrs = {}
vlanstrs = {}
ipdb = {}
if params.managementLanType == 'flat':
    ipdb['mgmt-lan'] = { 'base':'192.168','netmask':'255.255.0.0','values':[-1,-1,0,0] }
    pass
dataOffset = 10
ipSubnetsUsed = 0
for i in range(1,params.flatDataLanCount + 1):
    dlanstr = "%s-%d" % ('flat-lan',i)
    ipdb[dlanstr] = { 'base' : '10.%d' % (i + dataOffset + ipSubnetsUsed,),'netmask' : '255.255.0.0',
                      'values' : [-1,-1,10,0] }
    flatlanstrs[i] = dlanstr
    ipSubnetsUsed += 1
    pass
for i in range(1,params.vlanDataLanCount + 1):
    dlanstr = "%s-%d" % ('vlan-lan-',i)
    ipdb[dlanstr] = { 'base' : '10.%d' % (i + dataOffset + ipSubnetsUsed,),'netmask' : '255.255.0.0',
                      'values' : [-1,-1,10,0] }
    vlanstrs[i] = dlanstr
    ipSubnetsUsed += 1
    pass
for i in range(1,params.vxlanDataLanCount + 1):
    dlanstr = "%s-%d" % ('vxlan-lan',i)
    ipdb[dlanstr] = { 'base' : '10.%d' % (i + dataOffset + ipSubnetsUsed,),'netmask' : '255.255.0.0',
                      'values' : [-1,-1,10,0] }
    ipSubnetsUsed += 1
    pass
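# For illustration (not part of the original script): with the default
# parameters (1 flat, 1 GRE, 0 VLAN, 0 VXLAN data networks, VPN management),
# ipdb ends up with a single entry, 'flat-lan-1' -> base '10.11', netmask
# 255.255.0.0.  GRE and VXLAN networks are tunneled over the first flat
# network, so they get no subnet of their own here.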

# Assume a /16 for every network
def get_next_ipaddr(lan):
    ipaddr = ipdb[lan]['base']
    backpart = ''

    # Walk value slots 3..1 right-to-left, incrementing the lowest one; a
    # slot of -1 belongs to the subnet base and ends the walk.
    idxlist = range(1,4)
    idxlist.reverse()
    didinc = False
    for i in idxlist:
        if ipdb[lan]['values'][i] == -1:
            break
        if not didinc:
            didinc = True
            ipdb[lan]['values'][i] += 1
            if ipdb[lan]['values'][i] > 254:
                # Rollover is only partially handled: the next slot is
                # bumped, but this one is not reset.  Fine for experiments
                # that never allocate that many addresses per subnet.
                if ipdb[lan]['values'][i-1] == -1:
                    return ''
                else:
                    ipdb[lan]['values'][i-1] += 1
                    pass
                pass
            pass
        backpart = '.' + str(ipdb[lan]['values'][i]) + backpart
        pass

    return ipaddr + backpart

def get_netmask(lan):
    return ipdb[lan]['netmask']
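
# For illustration (not part of the original script): with the defaults above,
# successive calls to get_next_ipaddr('flat-lan-1') return 10.11.10.1,
# 10.11.10.2, and so on, and get_netmask('flat-lan-1') returns 255.255.0.0.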

#
# Ok, actually build the data LANs now...
#
flatlans = {}
vlans = {}
alllans = []

for i in range(1,params.flatDataLanCount + 1):
    datalan = RSpec.LAN(flatlanstrs[i])
    if params.osLinkSpeed > 0:
        datalan.bandwidth = int(params.osLinkSpeed)
        pass
    if params.multiplexFlatLans:
        datalan.link_multiplexing = True
        datalan.best_effort = True
        # Need this cause LAN() sets the link type to lan, not sure why.
        datalan.type = "vlan"
        pass
    flatlans[i] = datalan
    alllans.append(datalan)
    pass
for i in range(1,params.vlanDataLanCount + 1):
    datalan = RSpec.LAN("vlan-lan-%d" % (i,))
    if params.osLinkSpeed > 0:
        datalan.bandwidth = int(params.osLinkSpeed)
        pass
    datalan.link_multiplexing = True
    datalan.best_effort = True
    # Need this cause LAN() sets the link type to lan, not sure why.
    datalan.type = "vlan"
    vlans[i] = datalan
    alllans.append(datalan)
    pass

#
# Ok, also build a management LAN if requested.  If we build one, it runs over
# a dedicated experiment interface, not the Cloudlab public control network.
#
if params.managementLanType == 'flat':
    mgmtlan = RSpec.LAN('mgmt-lan')
    if params.multiplexFlatLans:
        mgmtlan.link_multiplexing = True
        mgmtlan.best_effort = True
        # Need this cause LAN() sets the link type to lan, not sure why.
        mgmtlan.type = "vlan"
        pass
    pass
else:
    mgmtlan = None
    pass

#
# Construct the disk image URNs we're going to set the various nodes to load.
#
if params.release == "juno":
    image_os = 'UBUNTU14-10-64'
elif params.release == "kilo":
    image_os = 'UBUNTU15-04-64'
else:
    image_os = 'UBUNTU15-10-64'
    pass
if params.fromScratch:
    image_tag_cn = 'STD'
    image_tag_nm = 'STD'
    image_tag_cp = 'STD'
else:
    image_tag_cn = 'OSCN'
    image_tag_nm = 'OSNM'
    image_tag_cp = 'OSCP'
    pass
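# For example, with the Liberty defaults (not from scratch) this selects image
# names UBUNTU15-10-64-OSCN (controller), UBUNTU15-10-64-OSNM (network
# manager), and UBUNTU15-10-64-OSCP (compute), which are substituted into the
# emulab-ops image URNs below.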

#
# Add the controller node.
#
controller = RSpec.RawPC(params.controllerHost)
if params.osNodeType:
    controller.hardware_type = params.osNodeType
    pass
controller.Site("1")
#controller.disk_image = "urn:publicid:IDN+utah.cloudlab.us+image+emulab-ops//%s-%s" % (image_os,image_tag_cn)
#controller.disk_image = "urn:publicid:IDN+utah.cloudlab.us+image+aerotest-PG0:custom_image"

i = 0
for datalan in alllans:
    iface = controller.addInterface("if%d" % (i,))
    datalan.addInterface(iface)
    if generateIPs:
        iface.addAddress(RSpec.IPv4Address(get_next_ipaddr(datalan.client_id),
                                           get_netmask(datalan.client_id)))
        pass
    i += 1
    pass
if mgmtlan:
    iface = controller.addInterface("ifM")
    mgmtlan.addInterface(iface)
    if generateIPs:
        iface.addAddress(RSpec.IPv4Address(get_next_ipaddr(mgmtlan.client_id),
                                           get_netmask(mgmtlan.client_id)))
        pass
    pass
controller.addService(RSpec.Install(url=TBURL, path="/tmp"))
controller.addService(RSpec.Execute(shell="sh",command=TBCMD))
rspec.addResource(controller)

#
# Add the network manager (neutron) node.
#
networkManager = RSpec.RawPC(params.networkManagerHost)
if params.osNodeType:
    networkManager.hardware_type = params.osNodeType
    pass
networkManager.Site("1")
networkManager.disk_image = "urn:publicid:IDN+utah.cloudlab.us+image+emulab-ops//%s-%s" % (image_os,image_tag_nm)
i = 0
for datalan in alllans:
    iface = networkManager.addInterface("if%d" % (i,))
    datalan.addInterface(iface)
    if generateIPs:
        iface.addAddress(RSpec.IPv4Address(get_next_ipaddr(datalan.client_id),
                                           get_netmask(datalan.client_id)))
        pass
    i += 1
    pass
if mgmtlan:
    iface = networkManager.addInterface("ifM")
    mgmtlan.addInterface(iface)
    if generateIPs:
        iface.addAddress(RSpec.IPv4Address(get_next_ipaddr(mgmtlan.client_id),
                                           get_netmask(mgmtlan.client_id)))
        pass
    pass
networkManager.addService(RSpec.Install(url=TBURL, path="/tmp"))
networkManager.addService(RSpec.Execute(shell="sh",command=TBCMD))
rspec.addResource(networkManager)

#
# Add the compute nodes.  First we generate names for each node at each site;
# then we create those nodes at each site.
#
computeNodeNamesBySite = {}
computeNodeList = ""
for i in range(1,params.computeNodeCount + 1):
    cpname = "%s-%d" % (params.computeHostBaseName,i)
    if 1 not in computeNodeNamesBySite:
        computeNodeNamesBySite[1] = []
        pass
    computeNodeNamesBySite[1].append(cpname)
    pass
for i in range(1,params.computeNodeCountSite2 + 1):
    cpname = "%s-s2-%d" % (params.computeHostBaseName,i)
    if 2 not in computeNodeNamesBySite:
        computeNodeNamesBySite[2] = []
        pass
    computeNodeNamesBySite[2].append(cpname)
    pass

for (siteNumber,cpnameList) in computeNodeNamesBySite.iteritems():
    for cpname in cpnameList:
        cpnode = RSpec.RawPC(cpname)
        if params.osNodeType:
            cpnode.hardware_type = params.osNodeType
            pass
        cpnode.Site(str(siteNumber))
        cpnode.disk_image = "urn:publicid:IDN+utah.cloudlab.us+image+emulab-ops//%s-%s" % (image_os,image_tag_cp)
        i = 0
        for datalan in alllans:
            iface = cpnode.addInterface("if%d" % (i,))
            datalan.addInterface(iface)
            if generateIPs:
                iface.addAddress(RSpec.IPv4Address(get_next_ipaddr(datalan.client_id),
                                                   get_netmask(datalan.client_id)))
                pass
            i += 1
            pass
        if mgmtlan:
            iface = cpnode.addInterface("ifM")
            mgmtlan.addInterface(iface)
            if generateIPs:
                iface.addAddress(RSpec.IPv4Address(get_next_ipaddr(mgmtlan.client_id),
                                                   get_netmask(mgmtlan.client_id)))
                pass
            pass
        cpnode.addService(RSpec.Install(url=TBURL, path="/tmp"))
        cpnode.addService(RSpec.Execute(shell="sh",command=TBCMD))
        rspec.addResource(cpnode)
        computeNodeList += cpname + ' '
        pass
    pass

for datalan in alllans:
    rspec.addResource(datalan)
if mgmtlan:
    rspec.addResource(mgmtlan)
    pass

#
# Grab a few public IP addresses.
#
apool = IG.AddressPool("nm",params.publicIPCount)
rspec.addResource(apool)
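
# NOTE: the XML namespace prefix `ns` used by the _write() methods below is
# not defined anywhere in this paste; in the full profile script it is
# presumably defined (as the profile-parameters extension namespace) before
# these classes.  As pasted, printRequestRSpec() would fail with a NameError.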

class EmulabEncrypt(RSpec.Resource):
    def _write(self, root):

#        el = ET.SubElement(root,"%sencrypt" % (ns,),attrib={'name':'adminPass'})
#        el.text = params.adminPass
        el = ET.SubElement(root,"%spassword" % (ns,),attrib={'name':'adminPass'})
        pass
    pass

#
# Add our parameters to the request so we can get their values to our nodes.
# The nodes download the manifest(s), and the setup scripts read the parameter
# values when they run.
#
class Parameters(RSpec.Resource):
    def _write(self, root):
        paramXML = "%sparameter" % (ns,)
        
        el = ET.SubElement(root,"%sprofile_parameters" % (ns,))

        param = ET.SubElement(el,paramXML)
        param.text = 'CONTROLLER="%s"' % (params.controllerHost,)
        param = ET.SubElement(el,paramXML)
        param.text = 'NETWORKMANAGER="%s"' % (params.networkManagerHost,)
        param = ET.SubElement(el,paramXML)
        param.text = 'COMPUTENODES="%s"' % (computeNodeList,)
#        param = ET.SubElement(el,paramXML)
#        param.text = 'STORAGEHOST="%s"' % (params.blockStorageHost,)
#        param = ET.SubElement(el,paramXML)
#        param.text = 'OBJECTHOST="%s"' % (params.objectStorageHost,)
        param = ET.SubElement(el,paramXML)
        param.text = 'DATALANS="%s"' % (' '.join(map(lambda(lan): lan.client_id,alllans)))
        param = ET.SubElement(el,paramXML)
        param.text = 'DATAFLATLANS="%s"' % (' '.join(map(lambda(i): flatlans[i].client_id,range(1,params.flatDataLanCount + 1))))
        param = ET.SubElement(el,paramXML)
        param.text = 'DATAVLANS="%s"' % (' '.join(map(lambda(i): vlans[i].client_id,range(1,params.vlanDataLanCount + 1))))
        param = ET.SubElement(el,paramXML)
        param.text = 'DATAVXLANS="%d"' % (params.vxlanDataLanCount,)
        param = ET.SubElement(el,paramXML)
        param.text = 'DATATUNNELS=%d' % (params.greDataLanCount,)
        param = ET.SubElement(el,paramXML)
        if mgmtlan:
            param.text = 'MGMTLAN="%s"' % (mgmtlan.client_id,)
        else:
            param.text = 'MGMTLAN=""'
            pass
#        param = ET.SubElement(el,paramXML)
#        param.text = 'STORAGEHOST="%s"' % (params.blockStorageHost,)
        param = ET.SubElement(el,paramXML)
        param.text = 'DO_APT_INSTALL=%d' % (int(params.doAptInstall),)
        param = ET.SubElement(el,paramXML)
        param.text = 'DO_APT_UPGRADE=%d' % (int(params.doAptUpgrade),)
        param = ET.SubElement(el,paramXML)
        param.text = 'DO_APT_UPDATE=%d' % (int(params.doAptUpdate),)

###        if params.adminPass and len(params.adminPass) > 0:
###            random.seed()
###            salt = ""
###            schars = [46,47]
###            schars.extend(range(48,58))
###            schars.extend(range(97,123))
###            schars.extend(range(65,91))
###            for i in random.sample(schars,16):
###                salt += chr(i)
###                pass
###            hpass = crypt.crypt(params.adminPass,'$6$%s' % (salt,))
###            param = ET.SubElement(el,paramXML)
###            param.text = "ADMIN_PASS_HASH='%s'" % (hpass,)
###            pass
###        else:
        param = ET.SubElement(el,paramXML)
        param.text = "ADMIN_PASS_HASH=''"
###            pass
        
        param = ET.SubElement(el,paramXML)
        param.text = "ENABLE_NEW_SERIAL_SUPPORT=%d" % (int(params.enableNewSerialSupport))
        
        param = ET.SubElement(el,paramXML)
        param.text = "DISABLE_SECURITY_GROUPS=%d" % (int(params.disableSecurityGroups))
        
        param = ET.SubElement(el,paramXML)
        param.text = "DEFAULT_SECGROUP_ENABLE_SSH_ICMP=%d" % (int(params.enableInboundSshAndIcmp))
        
        param = ET.SubElement(el,paramXML)
        param.text = "CEILOMETER_USE_MONGODB=%d" % (int(params.ceilometerUseMongoDB))
        
        param = ET.SubElement(el,paramXML)
        param.text = "VERBOSE_LOGGING=\"%s\"" % (str(bool(params.enableVerboseLogging)))
        param = ET.SubElement(el,paramXML)
        param.text = "DEBUG_LOGGING=\"%s\"" % (str(bool(params.enableDebugLogging)))
        
        param = ET.SubElement(el,paramXML)
        param.text = "TOKENTIMEOUT=%d" % (int(params.tokenTimeout))
        param = ET.SubElement(el,paramXML)
        param.text = "SESSIONTIMEOUT=%d" % (int(params.sessionTimeout))
        
        if params.keystoneVersion > 0:
            param = ET.SubElement(el,paramXML)
            param.text = "KEYSTONEAPIVERSION=%d" % (int(params.keystoneVersion))
            pass
        
        param = ET.SubElement(el,paramXML)
        param.text = "KEYSTONEUSEMEMCACHE=%d" % (int(bool(params.keystoneUseMemcache)))
        
        if params.keystoneUseWSGI == 0:
            param = ET.SubElement(el,paramXML)
            param.text = "KEYSTONEUSEWSGI=0"
        elif params.keystoneUseWSGI == 1:
            param = ET.SubElement(el,paramXML)
            param.text = "KEYSTONEUSEWSGI=1"
        else:
            pass
        
        param = ET.SubElement(el,paramXML)
        param.text = "QUOTASOFF=%d" % (int(bool(params.quotasOff)))

        return el
    pass

parameters = Parameters()
rspec.addResource(parameters)
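
# For illustration (not part of the original script): the Parameters resource
# above serializes into the request RSpec roughly as
#   <profile_parameters>
#     <parameter>CONTROLLER="ctl"</parameter>
#     <parameter>NETWORKMANAGER="nm"</parameter>
#     ...
#   </profile_parameters>
# (element names carry the `ns` namespace prefix); the per-node setup scripts
# download the manifest and read these values when they run.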

###if not params.adminPass or len(params.adminPass) == 0:
if True:
    stuffToEncrypt = EmulabEncrypt()
    rspec.addResource(stuffToEncrypt)
    pass

pc.printRequestRSpec(rspec)

-------

[SOL Session operational.  Use ~? for help]
[1339043.872573] mlx4_en: eth1: Link Up
Fri Feb 26 17:09:38 MST 2016: pxewait returns: mfs:/tftpboot/frisbee
continuing with frisbee...
Fri Feb 26 17:09:38 MST 2016: rc.frisbee starting
Authenticated IPOD enabled from 128.110.156.4/255.255.255.255
Loading image #0
  LOADINFO="ADDR= PART=1 PARTOS=Linux SERVER=128.110.156.4 OSVERSION=15.10 DISK=da0 ZFILL=0 ACPI=unknown MBRVERS=103 ASF=unknown PREPARE=0 NOCLFLUSH=unknown DOM0MEM=1024M IMAGEID=aerotest-PG0,aerotest-PG0,custom_image:0 IMAGEMTIME=1456460386 IMAGECHUNKS=1681 IMAGELOW=0 IMAGEHIGH=33554431 IMAGESSIZE=512 IMAGERELOC=0 LDISK=sda"
net.core.rmem_max = 1048576
net.core.wmem_max = 1048576
Installing GPT version 3 ...
[1339057.039407]  sda: sda1 sda15
[1339058.094136]  sda: sda1 sda15
Fri Feb 26 17:09:56 MST 2016: Running /usr/local/etc/emulab/frisbee -f -S 128.110.156.4 -M 64013 -k 1024   -s 1 -D 131 -B 30 -F aerotest-PG0/custom_image:0 /dev/sda
aerotest-PG0/custom_image:0: address: 235.4.216.106:21344, server: 128.110.156.4
Maximum socket buffer size of 1048576 bytes
Bound to port 21344
Using Multicast 235.4.216.106
Joined the team after 0 sec. ID is 1636557642. File is 1681 chunks (1762656256 bytes)
...................................................................   6   1614
...................................................................  12   1547
...................................................................  18   1481
.................s.................................................  24   1415
...................................................................  31   1349
...................................................................  37   1282
.......................................s.........................s.  43   1217
...................................................................  49   1150
...................................................................  55   1083
...................................................................  61   1017
...................................................................  67    950
...................................................................  73    884
...................................................................  80    817
...................................................................  86    750
..............s....................................................  92    684
...................................................................  98    617
...................................s............................... 104    551
................................................................... 110    484
................................................................... 116    418
....................................ss............................. 122    353
................................................................... 128    287
..........................ss....................................... 134    222
................................................................... 141    155
................................................................... 147     89
................................................................... 153     22
.......................
Client 1636557642 Performance:
  runtime:                156.084 sec
  start delay:            0.000 sec
  real data written:      7848873984 (50313294 Bps)
  effective data written: 17179869184 (110127366 Bps)
Client 1636557642 Params[1339217.293587] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null)
:
  chunk/block size:     1024/1024
  chunk buffers:        1681
  disk buffering (MB):  62330
  sockbuf size (KB):    1024
  readahead/inprogress: 2/8
  recv timo/count:    delay:     1000000
  writer idle delay:    1000
  randomize requests:   1
Client 1636557642 Stats:
  net thread idle/blocked:        0/0
  decompress thread idle/blocked: 1619/0
  disk thread idle:        5396
  join/request msgs:       1/1681
  dupblocks(chunk done):   0
  dupblocks(in progress):  0
  partial requests/blocks: 0/0
  re-requests:             0
  full chunk re-requests:  0
  partially-filled drops:  0

Left the team after 156 seconds on the field!
Set partition 1 type to 0x0083
Wrote 17179869184 bytes (7848873984 actual)
0 5396 32895
Fri Feb 26 17:12:35 MST 2016: Image #0 load complete
Fri Feb 26 17:12:35 MST 2016: Frisbee run(s) finished
Fri Feb 26 17:12:35 MST 2016: Running slicefix
Fri Feb 26 17:12:35 MST 2016: Adjusting slice-related files on sda slice 1
*** sda1:
  setting new root FS UUID
tune2fs 1.42.9 (4-Feb-2014)
e2fsck 1.42.9 (4-Feb-2014)
/dev/sda1: clean, 155741/1048576 files, 1905143/4194304 blocks
  fixing Linux root partition sda1
  updating /etc/fstab
  localizing ...
  updating /root/.ssh/authorized_keys
  updating /etc/ntp.conf
  moving ntp.drift to /var/lib/ntp...
Fri Feb 26 17:12:35 MST 2016: slicefix run(s) done
Fri Feb 26 17:12:37 MST 2016: Waiting for server to reboot us ...
[1339219.697986] IPOD: got type=6, code=6, iplen=666, host=128.110.156.4
[1339219.775288] IPOD: reboot forced by 128.110.156.4...
[1339219.835898] CPU1: stopping
[1339219.870428] CPU: 1 PID: 0 Comm: swapper/1 Tainted: G           OX 3.13.0-40-generic #69-Ubuntu
[1339219.975891] Call trace:
[1339220.007195] [] dump_backtrace+0x0/0x16c
[1339220.074058] [] show_stack+0x10/0x1c
[1339220.136652] [] dump_stack+0x74/0x94
[1339220.199347] [] handle_IPI+0x120/0x138
[1339220.264024] [] gic_handle_irq+0x74/0x7c
[1339220.330789] Exception stack(0xffffffcfbc8fbe20 to 0xffffffcfbc8fbf40)
[1339220.410073] be20: bc8f8000 ffffffcf bc8f8000 ffffffcf bc8fbf60 ffffffcf 00085600 ffffffc0
[1339220.510219] be40: 006a2a54 00000000 00000000 00000000 fff7e834 ffffffcf 07fb078c 00000001
[1339220.610367] be60: 008f5ec0 ffffffc0 00000010 00000000 67dbef80 0004c203 fff7edf0 ffffffcf
[1339220.710514] be80: bc8b7310 ffffffcf bc8fbd60 ffffffcf 07fb077d 00000001 00000000 00000000
[1339220.810661] bea0: 00000020 00000000 00000001 00000000 fff�

U-Boot 2013.04 (Mar 26 2015 - 11:31:01)

ProLiant m400 Server Cartridge - U02 (02/26/2015)
Copyright 2013 - 2015 Hewlett-Packard Development Company, L.P. 
Copyright 2000 - 2012 Wolfgang Denk, DENX Software Engineering, w...@denx.de


CPU0: APM ARM 64-bit Potenza Rev B0 2400MHz PCP 2400MHz
     32 KB ICACHE, 32 KB DCACHE
     SOC 2000MHz IOBAXI 400MHz AXI 250MHz AHB 200MHz GFC 125MHz
Boot from SPI-NOR
Slimpro FW: Ver: 2.3 (build 2015/09/09)
I2C:   ready
DRAM: PHY calibrating ... PHY calibrating ... PHY calibrating ... PHY calibrating ... ECC init ................ **************** ................ **************** ................ **************** ................ **************** ................ **************** ................ **************** ................ **************** ................ **************** ................ **************** ................ **************** ................ **************** ................ **************** ................ **************** ................ **************** ................ **************** ................ **************** ................ **************** ................ **************** ................ **************** ................ **************** ................ **************** ................ **************** ................ **************** ................ **************** ................ **************** ................ **************** ................ **************** ................ **************** ................ **************** ................ **************** ................ **************** ................ **************** ................                 64 GiB @ 1333MHz
relocation Address is: 0x4ffff27000
Using default environment

API sig @ 0x0000004ffdf17170
In:    serial
Out:   serial
Err:   serial
CPUs:  11111111
CPLD: 0B
PCIE3: (RC) X8 GEN-2 link up
  00:00.0     - 19aa:e008 - Bridge device
   01:00.0    - 15b3:1007 - Network controller
SF: Detected MX25L12805D with page size 64 KiB, total 16 MiB
SF: 16384 KiB MX25L12805D at 0:0 is now current device

SF: flash read success (19048 bytes @ 0xe0000)
.
SF: flash read success (65568 bytes @ 0xc0000)
Node Boot Start Time: 2016-02-27T00:13:08
Node Serial Number: CN7438V6K4
Cartridge Chassis Slot ID: 45
Cartridge Serial Number: CN7438V6K4
Chassis Serial Number: CN64250DEG
Chassis Asset Tag: 
Node UUID: D4D1C3EA-746B-525F-BFE8-7587797039B4
Product ID: 721717-B21
Timezone Name: America/Denver
SCSI:  Target spinup took 0 ms.
AHCI2 0001.0300 32 slots 2 ports 6 Gbps 0x3 impl SATA mode
flags: 64bit ncq pm only pmp fbss pio slum part ccc 
scanning bus for devices...
  Device 0: (4:0) Vendor: ATA      Prod.: XR0120GEBLT Rev: HPS4
            Type: Hard Disk
            Capacity: 114473.4 MB = 111.7 GB (234441648 x 512)
Found 1 device(s).
Boot: PXE, M.2
Net:   Mellanox ConnectX3 U-Boot driver version 1.1
Mellanox ConnectX3 Firmware Version 2.32.5330
Net:   NIC1 [PRIME], NIC2

Booting PXE
Requesting DHCP address via NIC1
BOOTP broadcast 1
DHCP client bound to address 128.110.152.90
Retrieving file: /tftpboot/pxelinux.cfg/D4D1C3EA-746B-525F-BFE8-7587797039B4
Using NIC1 device
TFTP from server 128.110.156.4; our IP address is 128.110.152.90; sending through gateway 128.110.152.1
Filename '/tftpboot/pxelinux.cfg/D4D1C3EA-746B-525F-BFE8-7587797039B4'.
Load address: 0x4000800000
Loading: *
TFTP error: 'File not found' (1)
Not retrying...
Retrieving file: /tftpboot/pxelinux.cfg/01-fc-15-b4-21-b1-a2
Using NIC1 device
TFTP from server 128.110.156.4; our IP address is 128.110.152.90; sending through gateway 128.110.152.1
Filename '/tftpboot/pxelinux.cfg/01-fc-15-b4-21-b1-a2'.
Load address: 0x4000800000
Loading: * #
270.5 KiB/s
done
Bytes transferred = 832 (340 hex)
Config file found
Emulab node boot
1: Local disk
2: Frisbee MFS
3: Wait for further instructions
4: Boot MFS to shell prompt
5: NFS-based MFS
Enter choice: 1: Local disk
PXE: executing localboot
295 bytes read in 51 ms (4.9 KiB/s)
## Executing script at 4004000000
12198464 bytes read in 374 ms (31.1 MiB/s)
26695571 bytes read in 751 ms (33.9 MiB/s)
## Booting kernel from Legacy Image at 4002000000 ...
   Image Name:   kernel 3.19.0-21-generic
   Created:      2016-02-03  17:20:12 UTC
   Image Type:   ARM Linux Kernel Image (uncompressed)
   Data Size:    12198400 Bytes = 11.6 MiB
   Load Address: 00080000
   Entry Point:  00080000
   Verifying Checksum ... OK
## Loading init Ramdisk from Legacy Image at 4005000000 ...
   Image Name:   ramdisk 3.19.0-21-generic
   Created:      2016-02-03  17:20:12 UTC
   Image Type:   ARM Linux RAMDisk Image (gzip compressed)
   Data Size:    26695507 Bytes = 25.5 MiB
   Load Address: 00000000
   Entry Point:  00000000
   Verifying Checksum ... OK
## Flattened Device Tree blob at 4003000000
   Booting using the fdt blob at 0x0000004003000000
   Loading Kernel Image ... OK
OK
   Loading Ramdisk to 4fee68a000, end 4feffff753 ... OK
   Loading Device Tree to 0000004000ff8000, end 0000004000fffa67 ... OK

Starting kernel ...

L3C: 8MB
[    0.000000] Booting Linux on physical CPU 0x0
[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Initializing cgroup subsys cpuacct
[    0.000000] Linux version 3.19.0-21-generic (buildd@beebe) (gcc version 4.9.2 (Ubuntu/Linaro 4.9.2-10ubuntu13) ) #21-Ubuntu SMP Sun Jun 14 18:34:06 UTC 2015 (Ubuntu 3.19.0-21.21-generic 3.19.8)
[    0.000000] CPU: AArch64 Processor [500f0001] revision 1
[    0.000000] Detected PIPT I-cache on CPU0
[    0.000000] efi: Getting EFI parameters from FDT:
[    0.000000] efi: UEFI not found.
[    0.000000] PERCPU: Embedded 14 pages/cpu @ffffffcffff48000 s19968 r8192 d29184 u57344
[    0.000000] Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 16515072
[    0.000000] Kernel command line: console=ttyS0,9600n8r ro root=/dev/sda1
[    0.000000] log_buf_len individual max cpu contribution: 4096 bytes
[    0.000000] log_buf_len total cpu_extra contributions: 28672 bytes
[    0.000000] log_buf_len min size: 16384 bytes
[    0.000000] log_buf_len: 65536 bytes
[    0.000000] early log buf free: 14852(90%)
[    0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
[    0.000000] Dentry cache hash table entries: 8388608 (order: 14, 67108864 bytes)
[    0.000000] Inode-cache hash table entries: 4194304 (order: 13, 33554432 bytes)
[    0.000000] Memory: 65922296K/67108864K available (7287K kernel code, 812K rwdata, 3180K rodata, 604K init, 743K bss, 1186568K reserved, 0K cma-reserved)
[    0.000000] Virtual kernel memory layout:
[    0.000000]     vmalloc : 0xffffff8000000000 - 0xffffffbdbfff0000   (   246 GB)
[    0.000000]     vmemmap : 0xffffffbdc0000000 - 0xffffffbfc0000000   (     8 GB maximum)
[    0.000000]               0xffffffbec0000000 - 0xffffffbf00000000   (  1024 MB actual)
[    0.000000]     PCI I/O : 0xffffffbffa000000 - 0xffffffbffb000000   (    16 MB)
[    0.000000]     fixed   : 0xffffffbffbdfd000 - 0xffffffbffbdff000   (     8 KB)
[    0.000000]     modules : 0xffffffbffc000000 - 0xffffffc000000000   (    64 MB)
[    0.000000]     memory  : 0xffffffc000000000 - 0xffffffd000000000   ( 65536 MB)
[    0.000000]       .init : 0xffffffc000abb000 - 0xffffffc000b52000   (   604 KB)
[    0.000000]       .text : 0xffffffc000080000 - 0xffffffc000abaf04   ( 10476 KB)
[    0.000000]       .data : 0xffffffc000b57000 - 0xffffffc000c22200   (   813 KB)
[    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
[    0.000000] Hierarchical RCU implementation.
[    0.000000] RCU dyntick-idle grace-period acceleration is enabled.
[    0.000000] NR_IRQS:64 nr_irqs:64 0
[    0.000000] Architected cp15 timer(s) running at 50.00MHz (phys).
[    0.000002] sched_clock: 56 bits at 50MHz, resolution 20ns, wraps every 2748779069440ns
[    0.000074] Console: colour dummy device 80x25
[    0.000087] Calibrating delay loop (skipped), value calculated using timer frequency.. 100.00 BogoMIPS (lpj=500000)
[    0.000092] pid_max: default: 32768 minimum: 301
[    0.000121] Security Framework initialized
[    0.000157] AppArmor: AppArmor initialized
[    0.000160] Yama: becoming mindful.
[    0.000230] Mount-cache hash table entries: 131072 (order: 8, 1048576 bytes)
[    0.000237] Mountpoint-cache hash table entries: 131072 (order: 8, 1048576 bytes)
[    0.000745] Initializing cgroup subsys memory
[    0.000756] Initializing cgroup subsys devices
[    0.000760] Initializing cgroup subsys freezer
[    0.000764] Initializing cgroup subsys net_cls
[    0.000768] Initializing cgroup subsys blkio
[    0.000772] Initializing cgroup subsys perf_event
[    0.000775] Initializing cgroup subsys net_prio
[    0.000779] Initializing cgroup subsys hugetlb
[    0.000797] ftrace: allocating 26186 entries in 103 pages
[    0.012060] hw perfevents: enabled with arm/armv8-pmuv3 PMU driver, 5 counters available
[    0.012075] EFI services will not be available.
[    0.013110] CPU1: Booted secondary processor
[    0.013113] Detected PIPT I-cache on CPU1
[    0.013274] CPU2: Booted secondary processor
[    0.013277] Detected PIPT I-cache on CPU2
[    0.013426] CPU3: Booted secondary processor
[    0.013427] Detected PIPT I-cache on CPU3
[    0.013560] CPU4: Booted secondary processor
[    0.013563] Detected PIPT I-cache on CPU4
[    0.013708] CPU5: Booted secondary processor
[    0.013710] Detected PIPT I-cache on CPU5
[    0.013846] CPU6: Booted secondary processor
[    0.013848] Detected PIPT I-cache on CPU6
[    0.013989] CPU7: Booted secondary processor
[    0.013991] Detected PIPT I-cache on CPU7
[    0.014019] Brought up 8 CPUs
[    0.014033] SMP: Total of 8 processors activated.
[    0.014336] devtmpfs: initialized
[    0.014513] evm: security.selinux
[    0.014516] evm: security.SMACK64
[    0.014517] evm: security.SMACK64EXEC
[    0.014519] evm: security.SMACK64TRANSMUTE
[    0.014521] evm: security.SMACK64MMAP
[    0.014522] evm: security.ima
[    0.014524] evm: security.capability
[    0.014587] DMI not present or invalid.
[    0.016015] NET: Registered protocol family 16
[    0.041876] cpuidle: using governor ladder
[    0.063141] cpuidle: using governor menu
[    0.063168] vdso: 2 pages (1 code @ ffffffc000b5d000, 1 data @ ffffffc000b5c000)
[    0.063187] hw-breakpoint: found 4 breakpoint and 4 watchpoint registers.
[    0.063391] software IO TLB [mem 0x40ffc00000-0x4100000000] (4MB) mapped at [ffffffc0ffc00000-ffffffc0ffffffff]
[    0.063420] DMA: preallocated 256 KiB pool for atomic allocations
[    0.063476] Serial: AMBA PL011 UART driver
[    0.094105] vgaarb: loaded
[    0.094514] SCSI subsystem initialized
[    0.094655] usbcore: registered new interface driver usbfs
[    0.094671] usbcore: registered new interface driver hub
[    0.094706] usbcore: registered new device driver usb
[    0.095106] NetLabel: Initializing
[    0.095109] NetLabel:  domain hash size = 128
[    0.095111] NetLabel:  protocols = UNLABELED CIPSOv4
[    0.095132] NetLabel:  unlabeled traffic allowed by default
[    0.095341] XGene: PCIe MSI driver v0.1
[    0.095413] Switched to clocksource arch_sys_counter
[    0.107393] AppArmor: AppArmor Filesystem Enabled
[    0.110360] NET: Registered protocol family 2
[    0.110616] TCP established hash table entries: 524288 (order: 10, 4194304 bytes)
[    0.111940] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
[    0.112306] TCP: Hash tables configured (established 524288 bind 65536)
[    0.112331] TCP: reno registered
[    0.112342] UDP hash table entries: 32768 (order: 8, 1048576 bytes)
[    0.112872] UDP-Lite hash table entries: 32768 (order: 8, 1048576 bytes)
[    0.113488] NET: Registered protocol family 1
[    0.113600] Trying to unpack rootfs image as initramfs...
[    0.660727] Freeing initrd memory: 26068K (ffffffcfee68a000 - ffffffcfeffff000)
[    0.660878] kvm [1]: Using HYP init bounce page @4fb2545000
[    0.660969] kvm [1]: interrupt-controller@780c0000 IRQ5
[    0.661067] kvm [1]: timer IRQ3
[    0.661077] kvm [1]: Hyp mode initialized successfully
[    0.661591] futex hash table entries: 2048 (order: 5, 131072 bytes)
[    0.661616] Initialise system trusted keyring
[    0.661672] audit: initializing netlink subsys (disabled)
[    0.661702] audit: type=2000 audit(0.650:1): initialized
[    0.661971] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[    0.664352] zpool: loaded
[    0.664357] zbud: loaded
[    0.664608] VFS: Disk quotas dquot_6.5.2
[    0.664670] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[    0.665545] fuse init (API version 7.23)
[    0.665787] Key type big_key registered
[    0.666182] Key type asymmetric registered
[    0.666189] Asymmetric key parser 'x509' registered
[    0.666269] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 252)
[    0.666334] io scheduler noop registered
[    0.666340] io scheduler deadline registered (default)
[    0.666397] io scheduler cfq registered
[    0.666652] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[    0.666663] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[    0.666711] PCI host bridge /soc/pcie@1f500000 ranges:
[    0.666716]   No bus range found for /soc/pcie@1f500000, using [bus 00-ff]
[    0.666724]   MEM 0xa130000000..0xa1afffffff -> 0x30000000
[    0.666753] xgene-pcie 1f500000.pcie: (rc) x8 gen-2 link up
[    0.666811] xgene-pcie 1f500000.pcie: PCI host bridge to bus 0000:00
[    0.666816] pci_bus 0000:00: root bus resource [bus 00-ff]
[    0.666821] pci_bus 0000:00: root bus resource [mem 0xa130000000-0xa1afffffff] (bus address [0x30000000-0xafffffff])
[    0.685542] pci 0000:00:00.0: BAR 15: assigned [mem 0xa130000000-0xa141ffffff 64bit pref]
[    0.685545] pci 0000:00:00.0: BAR 14: assigned [mem 0xa142000000-0xa1421fffff]
[    0.685820] pci 0000:01:00.0: BAR 2: assigned [mem 0xa130000000-0xa131ffffff 64bit pref]
[    0.686268] pci 0000:01:00.0: BAR 9: assigned [mem 0xa132000000-0xa141ffffff 64bit pref]
[    0.686451] pci 0000:01:00.0: BAR 0: assigned [mem 0xa142000000-0xa1420fffff 64bit]
[    0.686636] pci 0000:01:00.0: BAR 6: assigned [mem 0xa142100000-0xa1421fffff pref]
[    0.686677] pci 0000:00:00.0: PCI bridge to [bus 01]
[    0.686683] pci 0000:00:00.0:   bridge window [mem 0xa142000000-0xa1421fffff]
[    0.686687] pci 0000:00:00.0:   bridge window [mem 0xa130000000-0xa141ffffff 64bit pref]
[    0.686770] pcieport 0000:00:00.0: Signaling PME through PCIe PME interrupt
[    0.686774] pci 0000:01:00.0: Signaling PME through PCIe PME interrupt
[    0.687083] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled
[    0.689291] console [ttyS0] disabled
[    0.689322] 1c021000.serial: ttyS0 at MMIO 0x1c021000 (irq = 27, base_baud = 3125000) is a 16550A
[   10.986790] console [ttyS0] enabled
[   11.030924] brd: module loaded
[   11.068684] loop: module loaded
[   11.106709] libphy: Fixed MDIO Bus: probed
[   11.155852] tun: Universal TUN/TAP device driver, 1.6
[   11.216458] tun: (C) 1999-2004 Max Krasnyansky 
[   11.290729] PPP generic driver version 2.4.2
[   11.342072] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[   11.420429] ehci-pci: EHCI PCI platform driver
[   11.473753] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[   11.547927] ohci-pci: OHCI PCI platform driver
[   11.601253] uhci_hcd: USB Universal Host Controller Interface driver
[   11.677638] mousedev: PS/2 mouse device common for all mice
[   11.744582] i2c /dev entries driver
[   11.786474] platform soc:gpio_poweroff: Driver poweroff-gpio requests probe deferral
[   11.879640] device-mapper: uevent: version 1.0.3
[   11.935161] device-mapper: ioctl: 4.29.0-ioctl (2014-10-28) initialised: dm-d...@redhat.com
[   12.036504] Driver 'mmcblk' needs updating - please use bus_type methods
[   12.117000] ledtrig-cpu: registered to indicate activity on CPUs
[   12.189230] TCP: cubic registered
[   12.229197] NET: Registered protocol family 10
[   12.282912] NET: Registered protocol family 17
[   12.336259] Key type dns_resolver registered
[   12.387702] Loading compiled-in X.509 certificates
[   12.446587] Loaded X.509 cert 'Magrathea: Glacier signing key: e15f231a166c64ce4f1e4e2628af477f6ef725aa'
[   12.560416] registered taskstats version 1
[   12.612654] Key type trusted registered
[   12.669293] Key type encrypted registered
[   12.717392] AppArmor: AppArmor sha1 policy hashing enabled
[   12.783218] ima: No TPM chip found, activating TPM-bypass!
[   12.849091] evm: HMAC attrs: 0x1
[   12.887967] platform soc:gpio_poweroff: Driver poweroff-gpio requests probe deferral
[   12.980945] /build/buildd/linux-3.19.0/drivers/rtc/hctosys.c: unable to open rtc device (rtc0)
[   13.084835] Freeing unused kernel memory: 604K (ffffffc000abb000 - ffffffc000b52000)
[   13.177795] Freeing alternatives memory: 12K (ffffffc000b52000 - ffffffc000b55000)
Loading, please wait...
[   13.288317] random: systemd-udevd urandom read with 1 bits of entropy available
starting version 225
[   13.393327] xgene-ahci 1a800000.sata: skip clock and PHY initialization
[   13.396017] mlx4_core: Mellanox ConnectX core driver v2.2-1 (Feb, 2014)
[   13.396025] mlx4_core: Initializing 0000:01:00.0
[   13.624164] xgene-ahci 1a800000.sata: controller can't do NCQ, turning off CAP_NCQ
[   13.715031] xgene-ahci 1a800000.sata: controller can't do PMP, turning off CAP_PMP
[   13.805918] xgene-ahci 1a800000.sata: AHCI 0001.0300 32 slots 2 ports 6 Gbps 0x3 impl platform mode
[   13.914527] xgene-ahci 1a800000.sata: flags: 64bit sntf pm only fbs pio slum part ccc 
[   14.010008] scsi host0: ahci_platform
[   14.054127] scsi host1: ahci_platform
[   14.098139] ata1: SATA max UDMA/133 mmio [mem 0x1a800000-0x1a800fff] port 0x100 irq 38
[   14.193180] ata2: SATA max UDMA/133 mmio [mem 0x1a800000-0x1a800fff] port 0x180 irq 38
[   14.288343] platform soc:gpio_poweroff: Driver poweroff-gpio requests probe deferral
[   14.635424] ata2: SATA link down (SStatus 0 SControl 4300)
[   14.701264] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 4300)
[   14.776753] ata1.00: ATA-9: XR0120GEBLT, HPS4, max UDMA/133
[   14.843621] ata1.00: 234441648 sectors, multi 16: LBA48 NCQ (depth 0/32)
[   14.924262] ata1.00: configured for UDMA/133
[   14.975641] scsi 0:0:0:0: Direct-Access     ATA      XR0120GEBLT      HPS4 PQ: 0 ANSI: 5
[   15.073043] sd 0:0:0:0: [sda] 234441648 512-byte logical blocks: (120 GB/111 GiB)
[   15.073047] platform soc:gpio_poweroff: Driver poweroff-gpio requests probe deferral
[   15.073075] sd 0:0:0:0: Attached scsi generic
[   15.319455] sd 0:0:0:0: [sda] 4096-byte physical blocks
[   15.382235] sd 0:0:0:0: [sda] Write Protect is off
[   15.439743] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[   15.550075]  sda: sda1 sda15
[   15.584996] sd 0:0:0:0: [sda] Attached SCSI disk
[   20.405535] mlx4_core 0000:01:00.0: PCIe BW is different than device's capability
[   20.495360] mlx4_core 0000:01:00.0: PCIe link speed is 5.0GT/s, device supports 8.0GT/s
[   20.591436] mlx4_core 0000:01:00.0: PCIe link width is x8, device supports x8
[   20.677671] mlx4_core 0000:01:00.0: Found no xgene,msi phandle
[   20.818343] platform soc:gpio_poweroff: Driver poweroff-gpio requests probe deferral
[   20.820302] pps_core: LinuxPPS API ver. 1 registered
[   20.820303] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti 
[   20.822232] PTP clock support registered
[   20.831880] mlx4_en: Mellanox ConnectX HCA Ethernet driver v2.2-1 (Feb 2014)
[   20.832108] mlx4_en 0000:01:00.0: registered PHC clock
[   20.832157] mlx4_en 0000:01:00.0: Activating port:1
[   20.839090] mlx4_en: 0000:01:00.0: Port 1: Using 64 TX rings
[   20.839092] mlx4_en: 0000:01:00.0: Port 1: Using 4 RX rings
[   20.839094] mlx4_en: 0000:01:00.0: Port 1:   frag:0 - size:1518 prefix:0 stride:1536
[   20.839403] mlx4_en: 0000:01:00.0: Port 1: Initializing port
[   20.840881] mlx4_en 0000:01:00.0: Activating port:2
[   20.845870] mlx4_en: 0000:01:00.0: Port 2: Using 64 TX rings
[   20.845871] mlx4_en: 0000:01:00.0: Port 2: Using 4 RX rings
[   20.845873] mlx4_en: 0000:01:00.0: Port 2:   frag:0 - size:1518 prefix:0 stride:1536
[   20.846160] mlx4_en: 0000:01:00.0: Port 2: Initializing port
[   20.913425] mlx4_core 0000:01:00.0 enp1s0: renamed from eth0
[   21.834478] mlx4_en: enp1s0: Link Up
[   21.834633] mlx4_en: eth1: Link Up
[   22.165585] mlx4_core 0000:01:00.0 rename3: renamed from eth1
Begin: Loading essential drivers ... done.
Begin: Running /scripts/init-premount ... done.
[  113.497262] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null)
Begin: Mounting root file system ... Begin: Running /scripts/local-top ... done.
Begin: Running /scripts/local-premount ... done.
Begin: Running /scripts/local-bottom ... done.
done.
Begin: Running /scripts/init-bottom ... done.
[  113.844057] systemd[1]: Failed to insert module 'autofs4': Function not implemented

[  113.945796] systemd[1]: Failed to insert module 'kdbus': Function not implemented
[  114.056085] systemd[1]: systemd 225 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ -LZ4 +SECCOMP +BLKID -ELFUTILS +KMOD -IDN)
[  114.272315] systemd[1]: Detected architecture arm64.


[  114.334037] systemd[1]: Set hostname to .
Welcome to Ubuntu 15.10!
[  114.584916] systemd-sysv-generator[270]: Overwriting existing symlink /run/systemd/generator.late/umountiscsi.service with real service
[  114.813137] systemd[1]: ifup-wait-emulab-cnet.service: Cannot add dependency job, ignoring: Unit ifup-wait-emulab-cnet.service is masked.
[  114.967714] systemd[1]: nfs-blkmap.service: Cannot add dependency job, ignoring: Unit nfs-blkmap.service failed to load: No such file or directory.
[  115.129105] systemd[1]: gssproxy.service: Cannot add dependency job, ignoring: Unit gssproxy.service failed to load: No such file or directory.
[  115.283702] systemd[1]: display-manager.service: Cannot add dependency job, ignoring: Unit display-manager.service failed to load: No such file or directory.
[  115.453845] systemd[1]: Reached target User and Group Name Lookups.
[  115.529366] systemd[1]: Started Forward Password Requests to Wall Directory Watch.
[  115.636951] systemd[1]: Created slice Root Slice.
[  115.710087] systemd[1]: Listening on Device-mapper event daemon FIFOs.
[  115.805117] systemd[1]: Listening on LVM2 metadata daemon socket.
[  115.894952] systemd[1]: Listening on Journal Socket (/dev/log).
[  115.982773] systemd[1]: Listening on Journal Audit Socket.
[  116.065199] systemd[1]: Listening on LVM2 poll daemon socket.
[  116.150819] systemd[1]: Listening on fsck to fsckd communication Socket.
[  116.247952] systemd[1]: Listening on /dev/initctl Compatibility Named Pipe.
[  116.348312] systemd[1]: Created slice User and Session Slice.
[  116.433860] systemd[1]: Listening on udev Kernel Socket.
[  116.514301] systemd[1]: Listening on udev Control Socket.
[  116.595748] systemd[1]: Starting of Arbitrary Executable File Formats File System Automount Point not supported.
[  116.734726] systemd[1]: Created slice System Slice.
[  116.810805] systemd[1]: Starting Increase datagram queue length...
[  116.900910] systemd[1]: Created slice system-getty.slice.
[  116.982328] systemd[1]: Created slice system-systemd\x2dfsck.slice.
[  117.074096] systemd[1]: Reached target Slices.
[  117.144254] systemd[1]: Created slice system-openvpn.slice.
[  117.227831] systemd[1]: Created slice system-serial\x2dgetty.slice.
[  117.319528] systemd[1]: Reached target Encrypted Volumes.
[  117.401072] systemd[1]: Listening on Journal Socket.
[  117.478190] systemd[1]: Starting Emulab fstab fixup (swap)...
[  117.564064] systemd[1]: Mounting Huge Pages File System...
[  117.646598] systemd[1]: Starting Uncomplicated firewall...
[  117.729066] systemd[1]: Mounting POSIX Message Queue File System...
[  117.820986] systemd[1]: Started Read required files in advance.
[  117.912570] systemd[1]: Starting Load Kernel Modules...
[  117.992441] systemd[1]: Starting udev Coldplug all Devices...
[  118.073737] systemd[1]: Mounting RPC Pipe File System...
[  118.154198] systemd[1]: Starting Create Static Device Nodes in /dev...
[  118.249170] systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
[  118.389087] systemd[1]: Starting Remount Root and Kernel File Systems...
[  118.486094] systemd[1]: Starting Setup Virtual Console...
[  118.567586] systemd[1]: Mounting Debug File System...
[  118.645750] systemd[1]: Mounted Huge Pages File System.
[  118.723506] systemd[1]: Mounted Debug File System.
[  118.797675] systemd[1]: Mounted POSIX Message Queue File System.
[  118.886831] systemd[1]: Started Increase datagram queue length.
[  118.974809] systemd[1]: Started Uncomplicated firewall.
[  119.054211] systemd[1]: Started Load Kernel Modules.
[  119.129660] EXT4-fs (sda1): re-mounted. Opts: (null)
[  119.130875] systemd[1]: run-rpc_pipefs.mount: Mount pr
[  119.131105] systemd[1]: Failed to mount RPC Pipe File System.
[  119.372819] systemd[1]: Dependency failed for RPC security service for NFS server.
[  119.480340] systemd[1]: rpc-svcgssd.service: Job rpc-svcgssd.service/start failed with result 'dependency'.
[  119.613987] systemd[1]: Dependency failed for RPC security service for NFS client and server.
[  119.732882] systemd[1]: rpc-gssd.service: Job rpc-gssd.service/start failed with result 'dependency'.
[  119.732897] systemd[1]: run-rpc_pipefs.mount: Unit entered failed state.
[  119.733835] systemd[1]: Started Emulab fstab fixup (swap).
[  120.040895] systemd[1]: Started Create Static Device Nodes in /dev.
[  120.132199] systemd[1]: Started Remount Root and Kernel File Systems.
[  120.226236] systemd[1]: Started Setup Virtual Console.
[  120.304670] systemd[1]: Started udev Coldplug all Devices.
[  120.422339] systemd[1]: Started LVM2 metadata daemon.
[  120.498502] systemd[1]: Starting Load/Save Random Seed...
[  120.579788] systemd[1]: Starting udev Kernel Device Manager...
[  120.665282] systemd[1]: Reached target Swap.
[  120.733153] systemd[1]: Reached target NFS client services.
[  120.817736] systemd[1]: Starting Apply Kernel Variables...

[  OK  ] Created slice system-getty.slice.
[  OK  ] Created slice system-systemd\x2dfsck.slice.
[  OK  ] Reached target Slices.
[  OK  ] Created slice system-openvpn.slice.
[  OK  ] Created slice system-serial\x2dgetty.slice.
[  OK  ] Reached target Encrypted Volumes.
[  OK  ] Listening on Journal Socket.
         Starting Emulab fstab fixup (swap)...
         Mounting Huge Pages File System...
         Starting Uncomplicated firewall...
         Mounting POSIX Message Queue File System...
[  OK  ] Started Read required files in advance. (ureadahead.service)
         Starting Load Kernel Modules...
         Starting udev Coldplug all Devices...
         Mounting RPC Pipe File System...
         Starting Create Static Device Nodes in /dev...
         Starting Monitoring of LVM2 mirrors... dmeventd or progress polling...
         Starting Remount Root and Kernel File Systems...
         Starting Setup Virtual Console...
         Mounting Debug File System...
[  OK  ] Mounted Huge Pages File System. (dev-hugepages.mount)
[  OK  ] Mounted Debug File System. (sys-kernel-debug.mount)
[  OK  ] Mounted POSIX Message Queue File System. (dev-mqueue.mount)
[  OK  ] Started Increase datagram queue length. (systemd-setup-dgram-qlen.service)
[  OK  ] Started Uncomplicated firewall. (ufw.service)
[  OK  ] Started Load Kernel Modules. (systemd-modules-load.service)
[FAILED] Failed to mount RPC Pipe File System.
See 'systemctl status run-rpc_pipefs.mount' for details.
[DEPEND] Dependency failed for RPC security service for NFS server.
[DEPEND] Dependency failed for RPC security service for NFS client and server.
[  OK  ] Started Emulab fstab fixup (swap).
[  OK  ] Started Create Static Device Nodes in /dev. (systemd-tmpfiles-setup-dev.service)
[  OK  ] Started Remount Root and Kernel File Systems. (systemd-remount-fs.service)
[  OK  ] Started Setup Virtual Console. (systemd-vconsole-setup.service)
[  OK  ] Started udev Coldplug all Devices. (systemd-udev-trigger.service)
[  OK  ] Started LVM2 metadata daemon. (lvm2-lvmetad.service)
         Starting Load/Save Random Seed...
         Starting udev Kernel Device Manager...
[  OK  ] Reached target Swap.
[  OK  ] Reached target NFS client services.
         Starting Apply Kernel Variables...
         Mounting FUSE Control File System...
[  OK  ] Listening on Syslog Socket.
         Starting Journal Service...
[  OK  ] Mounted FUSE Control File System. (sys-fs-fuse-connections.mount)
[  OK  ] Started udev Kernel Device Manager. (systemd-udevd.service)
[  OK  ] Started Load/Save Random Seed. (systemd-random-seed.service)
[  OK  ] Started Apply Kernel Variables. (systemd-sysctl.service)
[  OK  ] Reached target Local File Systems (Pre).
[  OK  ] Started Monitoring of LVM2 mirrors,...ng dmeventd or progress polling. (lvm2-monitor.service)
[  OK  ] Started Journal Service. (systemd-journald.service)
         Starting Flush Journal to Persistent Storage...
[  OK  ] Started Flush Journal to Persistent Storage. (systemd-journal-flush.service)
[  OK  ] Found device /dev/ttyS0.
[  OK  ] Created slice system-ifup.slice.
[  OK  ] Found device MT27520 Family [ConnectX-3 Pro].

[***   ] A start job is running for dev-open...-swiftv1.device (11s / 1min 30s)
[***   ] A start job is running for dev-open...tv1.device (1min 30s / 1min 30s)
[ TIME ] Timed out waiting for device dev-openstack\x2dvolumes-swiftv1.device.
[  205.607615] mlx4_en: enp1s0:   frag:0 - size:1518 prefix:0 stride:1536

[DEPEND] Dependency failed for File System C... /dev/openstack-volumes/swiftv1.
[DEPEND] Dependency failed for /storage/mnt/swift/swiftv1.
[DEPEND] Dependency failed for Local File Systems.
[DEPEND] Dependency failed for Clean up any mess left by 0dns-up.
         Starting Preprocess NFS configuration...
         Starting Set console keymap...
[  OK  ] Stopped target Graphical Interface.
[  OK  ] Started Stop ureadahead data collection 45s after completed startup.
[  OK  ] Stopped Accounts Service.
[  OK  ] Stopped Serial Getty on ttyS0.
[  OK  ] Closed ACPID Listen Socket.
[  OK  ] Stopped Trigger resolvconf update for networkd DNS.
[  OK  ] Stopped Getty on tty1.
[  OK  ] Closed PC/SC Smart Card Daemon Activation Socket.
[  OK  ] Stopped getty on tty2-tty6 if dbus and logind are not available.
[  OK  ] Stopped Daily Cleanup of Temporary Directories.
[  OK  ] Reached target Timers.
[  OK  ] Closed UUID daemon activation socket.
[  OK  ] Stopped target Multi-User System.
[  OK  ] Stopped OpenStack Compute novncproxy.
[  OK  ] Stopped LSB: Set the CPU Frequency Scaling governor to "ondemand".
[  OK  ] Stopped Ceilometer Collector.
[  OK  ] Stopped fast remote file copy program daemon.
[  OK  ] Stopped Ceilometer API.
[  OK  ] Stopped LSB: Apache2 web server.
[  OK  ] Stopped OpenStack Compute Cert.
[  OK  ] Stopped LSB: Swift object server.
[  OK  ] Stopped OpenStack Compute Conductor.
[  OK  ] Stopped Heat API.
[  OK  ] Stopped Terminate Plymouth Boot Screen.
[  OK  ] Stopped Wait for Plymouth Boot Screen to Quit.
[  OK  ] Stopped LSB: Swift account replicator.
[  OK  ] Stopped Heat API.
[  OK  ] Stopped LSB: Swift container server.
[  OK  ] Stopped LSB: Start and stop pubsubd.
[  OK  ] Stopped LSB: Swift container sync.
[  OK  ] Stopped Restore /etc/resolv.conf if...ore the ppp link was shut down..
[  OK  ] Stopped OpenStack Cinder Volume.
[  OK  ] Stopped Deferred execution scheduler.
[  OK  ] Stopped LSB: daemon to balance interrupts for SMP systems.
[  OK  ] Stopped D-Bus System Message Bus.
[  OK  ] Stopped OpenStack Cinder Api.
[  OK  ] Stopped Permit User Sessions.
[  OK  ] Stopped OpenVPN connection to server.
[  OK  ] Stopped OpenVPN service.
[  OK  ] Stopped OpenStack Cinder Scheduler.
[  OK  ] Stopped Openstack Trove API.
[  OK  ] Stopped /etc/rc.local Compatibility.
[  OK  ] Stopped memcached daemon.
[  OK  ] Stopped OpenStack Image Service API.
[  OK  ] Stopped LSB: Swift object auditor.
[  OK  ] Stopped Ceilometer Alarm Evaluator.
[  OK  ] Stopped LSB: Swift account reaper.
[  OK  ] Stopped Open vSwitch.
[  OK  ] Stopped Open vSwitch Internal Unit.
[  OK  ] Reached target Network (Pre).
[  OK  ] Stopped LSB: Swift object updater.
[  OK  ] Stopped LSB: Swift container updater.
[  OK  ] Stopped Login Service.
[  OK  ] Closed D-Bus System Message Bus Socket.
[  OK  ] Stopped LSB: Swift object replicator.
[  OK  ] Stopped LSB: Swift proxy server.
[  OK  ] Stopped LSB: Swift container replicator.
[  OK  ] Stopped OpenStack Neutron Server.
[  OK  ] Stopped Ceilometer Alarm Notifier.
[  OK  ] Stopped Openstack Trove Conductor.
[  OK  ] Stopped OpenStack Compute Scheduler.
[  OK  ] Stopped Ceilometer Notification Agent.
[  OK  ] Stopped LSB: automatic crash report generation.
[  OK  ] Stopped Openstack Trove Task Manager.
[  OK  ] Stopped OpenBSD Secure Shell server.
[  OK  ] Stopped Testbed Services.
[  OK  ] Stopped Regular background program processing daemon.
[  OK  ] Stopped OpenStack Sahara Api & Engine servers.
[  OK  ] Stopped LSB: Recovers broken jove sessions..
[  OK  ] Stopped OpenStack Sahara API server.
[  OK  ] Stopped System Logging Service.
Welcome to emergency mode!
Give root password for maintenance
(or press Control-D to continue):




David M. Johnson

unread,
Feb 27, 2016, 5:56:51 PM2/27/16
to cloudla...@googlegroups.com
On 02/26/16 17:32, Jacob Everist wrote:
> Hello all,
>
> I've been doing my best all week to try and solve this problem. I am
> trying to create an Openstack controller node with a prepared custom
> image in the glance repository available for spinning up an instance.

I think it is going to be much easier if you save off your glance
snapshots, store them on (semi-)permanent Cloudlab storage, and import
each snapshot into a new experiment as a glance image. A more
complicated (and maybe impossible) alternative would be to dump glance's
state (at both the SQL *and* filesystem level) so that image/snapshot
relationships remain intact, and then restore all of that state in the
new experiment with the same SQL relationships and UUID/path info
on disk...
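
To give a concrete idea, the save/import could be scripted from the
controller roughly as below. This is only a sketch under assumptions: a
Liberty-era python-glanceclient and keystoneclient, placeholder
endpoint/credentials/image names, and permanent storage mounted at
/dataset.

# Sketch only: save a glance snapshot to permanent storage, then
# re-import it in a new experiment.  The auth URL, credentials, image
# name, and /dataset path are placeholders, not values from the profile.
from keystoneclient.auth.identity import v2
from keystoneclient import session
from glanceclient import Client

auth = v2.Password(auth_url="http://ctl:5000/v2.0", username="admin",
                   password="ADMIN_PASS", tenant_name="admin")
glance = Client("2", session=session.Session(auth=auth))

# Save: stream the snapshot's bytes onto the permanent storage mount.
image_id = next(i.id for i in glance.images.list() if i.name == "my-snapshot")
with open("/dataset/my-snapshot.img", "wb") as f:
    for chunk in glance.images.data(image_id):
        f.write(chunk)

# Restore (in the new experiment): create a fresh image record and
# upload the saved bytes into it.
new = glance.images.create(name="my-snapshot", disk_format="qcow2",
                           container_format="bare")
glance.images.upload(new.id, open("/dataset/my-snapshot.img", "rb"))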

(I say that is much easier because the profile does not support
re-instantiating a new experiment from custom images that already have
configuration and state in the OpenStack databases. The primary
complicating factors are the raw public IP addresses (both the control
network IP addresses for the Cloudlab machines, and the floating IPs
that the profile requests on your behalf), which are (necessarily)
placed by the setup scripts into various bits of OpenStack configuration
where we can't use FQDNs. When you create a new experiment, the odds
that you will obtain the same set of machines, with the same physical
control net IPs, are slim. You *can* request the exact same machines,
but this is not something we can provide to Cloudlab users en masse
(just imagine the scheduling nightmares). And that's not the worst of
it: many Cloudlab machines have multiple disks, and the profile tries
to use the second disk as an LVM physical volume, so we'd have to save
that too. Sure, this *could* all be handled, but it's pretty hard.)

If you want to pursue what I suggest in the first paragraph, here's how
I would support it. Some Cloudlab clusters allow you to create
long-term datasets: network-accessible block devices that can be
attached to nodes within your experiment. I've just extended the
OpenStack profile to support attaching a dataset to one of its nodes.
Make sure you use the latest version of the profile (your copy doesn't
have the necessary code in the geni-lib script to handle blockstores,
but the latest version does).

To do this, first create a long-term dataset, by going to the Cloudlab
web Portal, and clicking "Create Dataset" from the Actions drop-down
menu. Type in a name for your dataset; select "Long term", and choose
an appropriate size. Then select a cluster. Remember this cluster,
because you'll need to choose it again when you create your Openstack
experiment. Once you create your dataset, you'll see its status page,
on which will be the dataset's URN. Save this for the next step...

Then create your OpenStack experiment, but this time, when you get to
the parameter dialogue, expand the "Advanced Parameters", find the
"Remote Block Store URN" about halfway down, and enter the URN of your
dataset from the previous step. Be sure to instantiate the experiment
on the cluster where you created the dataset.
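
For reference, what the profile now does on your behalf looks roughly
like the following in geni-lib. This is a minimal sketch of the
standard remote-blockstore idiom, not the profile's actual code; the
URN, node, and interface names are placeholders.

# Sketch: attach a long-term dataset to a node as a remote blockstore.
# The dataset URN and all names here are placeholders.
import geni.portal as portal

pc = portal.Context()
request = pc.makeRequestRSpec()

node = request.RawPC("ctl")

# The dataset, mounted at /dataset on node "ctl".
bs = request.RemoteBlockstore("dsnode", "/dataset")
bs.dataset = "urn:publicid:IDN+utah.cloudlab.us:myproj+ltdataset+mydataset"

# Wire the blockstore to the node over a best-effort, tagged link.
link = request.Link("dslink")
link.addInterface(node.addInterface("ifds"))
link.addInterface(bs.interface)
link.best_effort = True
link.vlan_tagging = True

pc.printRequestRSpec(request)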

Hope this is helpful...

David

David M. Johnson

unread,
Mar 3, 2016, 2:04:47 PM3/3/16
to Jacob Everist, cloudlab-users
On 03/03/16 11:42, Jacob Everist wrote:
> David,
>
> I see that the long-term data storage attaches to the controller node
> and establishes a network connection. I found that I needed to check
> the "multiplex flat networks" option for the experiment to start up
> properly because my node ran out of network interfaces.

Yes, you'll need to specify the multiplex flag on clusters whose nodes
don't have at least two experiment network interfaces if you keep the
default single flat network and also add a connection to a long-term
dataset. Sorry I didn't document that like I did for the LAN options;
I didn't think of it here.

> I have a question. How does the storage become available to the
> controller node? Does it establish a partition? I noticed that there
> is a new device /dev/sdX mounted on the file system at /storage.
> Furthermore, a Swift directory is created within. Is this the dataset
> partition? Is it intended to persist the Swift Object Store between
> experiments? I haven't been able to see that.

Sorry, no, this is a bug in the default parameters. I forgot that I'm
already using /storage elsewhere in the profile as the mount point for
local LVM volumes. I'll change the default value for the parameter and
add a warning, but in the meantime you can just change the blockstore
mount point to "/dataset" -- that's what I'll change it to.

Thanks for trying this stuff out!

David
[ snip ]


Jacob Everist

unread,
Mar 3, 2016, 2:55:40 PM3/3/16
to David M. Johnson, cloudlab-users
I was able to successfully persist data on the /dataset partition between two subsequent experiments on the APT cluster.


Jacob Everist

unread,
Mar 21, 2016, 5:37:07 PM3/21/16
to David M. Johnson, cloudlab-users
David,

What is your envisioned scenario for reading or writing data on the dataset partition from a cloud instance?  If I were generating data from an experiment on a cloud instance, the only way I can think of to push results to a permanent store is to scp the data from the instance to the controller node and write it to the dataset partition.

Is there some other way to access this partition from a cloud instance?

David M. Johnson

unread,
Mar 21, 2016, 5:55:31 PM3/21/16
to Jacob Everist, cloudlab-users
On 03/21/16 15:37, Jacob Everist wrote:
> David,
>
> What is your envisioned scenario for reading or writing data on the
> dataset partition from a cloud instance? If I were generating data
> from an experiment on a cloud instance, the only way I can think of to
> push results to a permanent store is to scp the data from the instance
> to the controller node and write it to the dataset partition.
>
> Is there some other way to access this partition from a cloud instance?

Well, another idea would be to use the dataset as an LVM physical volume
that Cinder uses to create logical volumes that can be attached to
instances. You'd umount it, add its block device as an LVM physical
volume using `pvcreate', then use `vgcreate' to create a volume group
atop that PV.

Then I'd reconfigure cinder to use *that* LVM volume group instead of
the one the profile sets up, restart cinder, and create cinder volumes
and attach them to my instances. If you later want to access the
instance data from the controller or another node, you can attach the
dataset to a different node or experiment and mount the LVM logical
volumes (created by cinder) wherever you want to view or edit the
instance data.
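
In script form, that first set of steps might look something like the
sketch below. The device name /dev/sdb, the volume group name, the
cinder.conf section, and the service name are all assumptions you'd
have to adjust for your nodes and the Liberty packaging.

# Sketch: repurpose the dataset block device as an LVM volume group for
# Cinder.  /dev/sdb, the VG name, and the config edit are assumptions.
import subprocess

DEV = "/dev/sdb"          # block device the dataset shows up as
VG = "dataset-volumes"    # new volume group for cinder to carve LVs from

subprocess.check_call(["umount", "/dataset"])  # stop using it as a plain filesystem
subprocess.check_call(["pvcreate", DEV])       # make the device an LVM physical volume
subprocess.check_call(["vgcreate", VG, DEV])   # build a volume group atop that PV

# Point the LVM driver's volume_group option at the new VG, then restart
# cinder-volume so it picks up the change.  Appending a section like this
# is a crude stand-in for editing the profile's actual backend stanza.
with open("/etc/cinder/cinder.conf", "a") as f:
    f.write("\n[lvm]\nvolume_group = %s\n" % VG)
subprocess.check_call(["service", "cinder-volume", "restart"])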

Of course, in this case, if you want the cinder volumes to persist
across experiments, you'll presumably have to harvest that metadata from
the cinder SQL database on the controller, and push it back into the
cinder db in your next experiment.

I don't know if all this will work, but it's probably what I'd try, at
least if the Cinder db state save/restore is straightforward.

David
