Disk space for physical node


asep...@gmail.com

Feb 10, 2015, 8:14:48 AM
to cloudla...@googlegroups.com
Hi all,

Is it possible to specify the disk space for a physical node instance? How can I do that?
I found that I only have around 5.8 GB for /.
Here's df -h output from my experiment node:
Filesystem                                              Size  Used Avail Use% Mounted on
udev                                                     10M     0   10M   0% /dev
tmpfs                                                   1.6G  384K  1.6G   1% /run
/dev/disk/by-uuid/947019a0-18b4-4ae6-bc83-b2bb42903afd  5.8G  2.7G  2.9G  49% /
tmpfs                                                   5.0M     0  5.0M   0% /run/lock
tmpfs                                                   3.2G  3.1M  3.2G   1% /run/shm
ops.apt.emulab.net:/share                               243G   16G  208G   7% /share
ops.apt.emulab.net:/proj/cloudlab-tomato-PG0            100G  4.2G   96G   5% /proj/cloudlab-tomato-PG0
/dev/fuse                                                30M   16K   30M   1% /etc/pve

I might need more disk space for a database and for storing Proxmox VM templates.
There's more space in /share, but it seems to be read-only.


Thank you.


Regards,

Asep

Leigh Stoller

Feb 10, 2015, 9:51:31 AM
to asep...@gmail.com, cloudla...@googlegroups.com
> Is it possible to specify disk space for physical node instance? How can I do that?
> I found that I only have around 5.8 GB for / .

Yes, but a question: Do you need space for a lot of read-only data or
do you need read-write space?

Also, I notice you are using the APT cluster; is that the only cluster
you are using (i.e., do you only need your data there)?

Mostly this question refers to whether the space needs to be in the image
for the next experiment (read-write) or if it can be just on the APT
cluster in a datastore (RW or RO) that is available to users of your
profile. The RW vs RO question is important here; only one experiment at a
time can use a RW datastore, but many experiments at a time can use a RO
datastore.

Depending on your needs, we can definitely provide the mechanism.

Leigh





Asep Noor Mukhdari Sutrisna

Feb 10, 2015, 10:31:39 AM
to Leigh Stoller, cloudla...@googlegroups.com

> On Feb 10, 2015, at 3:51 PM, Leigh Stoller <lbst...@gmail.com> wrote:
>
>> Is it possible to specify disk space for physical node instance? How can I do that?
>> I found that I only have around 5.8 GB for / .
>
> Yes, but a question: Do you need space for a lot of read-only data or
> do you need read-write space?

I need read-write space, because there will probably be template additions and synchronisation between nodes while running an experiment.

> Also, I notice you are using the APT cluster, is that the only cluster
> you are using (only need your data there)?

Not only the APT cluster; my plan is to make use of every CloudLab cluster that has x86 hardware available.


> Mostly this question refers to whether the space needs to be in the image
> for the next experiment (read-write) or if it can be just on the APT
> cluster in a datastore (RW or RO) that is available to users of your
> profile. The RW vs RO question is important here; only one experiment at a
> time can use a RW datastore, but many experiments at a time can use a RO
> datastore.
>
> Depending on your needs, we can definitely provide the mechanism.

I was thinking of linking my templates and data to /proj/cloudlab-tomato* but then realised that the mount point differs on each cluster.
Thank you.

Regards,

Asep

Leigh Stoller

Feb 10, 2015, 12:07:20 PM
to Asep Noor Mukhdari Sutrisna, cloudla...@googlegroups.com
> I was thinking to link my template and data to /proj/cloudlab-tomato* but
> then realise that the mount point differs on each cluster.

Okay, given your needs, you will want to do this on your node:

pc> sudo /usr/local/etc/emulab/mkextrafs.pl /somedir

/somedir (or whatever) should not exist. This will bring in the last
partition on the primary drive. Lots of space there.

Very important: before you make the snapshot, I need to do something in the
database to ensure that the last partition is saved as part of the image. I
just need to do it the first time; subsequent snapshots will do the right
thing.

Now, what you need to be careful of is putting 100s of GB of stuff on this
partition and then taking repeated snapshots. You will run out of disk
space on the file server pretty quickly. If that is part of the workflow,
we need to come up with another plan.

Leigh





Asep Noor Mukhdari Sutrisna

Feb 10, 2015, 12:38:14 PM
to Leigh Stoller, cloudla...@googlegroups.com

> On Feb 10, 2015, at 6:07 PM, Leigh Stoller <lbst...@gmail.com> wrote:
>
>> I was thinking to link my template and data to /proj/cloudlab-tomato* but
>> then realise that the mount point differs on each cluster.
>
> Okay, given your needs, you will want to do this on your node:
>
> pc> sudo /usr/local/etc/emulab/mkextrafs.pl /somedir
>
> /somedir (or whatever) should not exist. This will bring in the last
> partition on the primary drive. Lots of space there.

Hi, I tried that command, but it gave me errors:
root@thm:/users/sutrisna# /usr/local/etc/emulab/mkextrafs.pl /tomato
*** /usr/local/etc/emulab/mkextrafs.pl:
/tomato does not exist!
root@thm:/users/sutrisna# mkdir /tomato
root@thm:/users/sutrisna# /usr/local/etc/emulab/mkextrafs.pl /tomato
Error: Could not stat device /dev/hda - No such file or directory.
*** /usr/local/etc/emulab/mkextrafs.pl:
Could not write dos label to /dev/hda!



> Very important; before you make the snapshot, I need to do something to the
> database to ensure that last partition is saved as part of the image. I
> just need to do it the first time, subsequent snapshots will do the right
> thing.
>
> Now, what you need to be careful of is putting 100s of GB of stuff on this
> partition and then taking repeated snapshots. You will run out of disk
> space on the file server pretty quickly. If that is part of the workflow,
> we need to come up with another plan.

I'm not planning to make a snapshot with the extra partition already filled. The partition should just exist, empty, so it can be filled during the experiment. When the experiment is done, the data can be gone too.


Regards,

Asep

Leigh Stoller

Feb 10, 2015, 12:44:28 PM
to Asep Noor Mukhdari Sutrisna, cloudla...@googlegroups.com
> Hi, I tried that command, but it gave me errors:
> root@thm:/users/sutrisna# /usr/local/etc/emulab/mkextrafs.pl /tomato
> *** /usr/local/etc/emulab/mkextrafs.pl:
> /tomato does not exist!
> root@thm:/users/sutrisna# mkdir /tomato
> root@thm:/users/sutrisna# /usr/local/etc/emulab/mkextrafs.pl /tomato
> Error: Could not stat device /dev/hda - No such file or directory.
> *** /usr/local/etc/emulab/mkextrafs.pl:
> Could not write dos label to /dev/hda!

Hmm, I guess the built-in default for the device is wrong. Give me
a minute to figure out what the right option is.

Leigh





Leigh Stoller

Feb 10, 2015, 12:55:38 PM
to Asep Noor Mukhdari Sutrisna, cloudla...@googlegroups.com
> I’m not planning to make snapshot with the extra partition already
> filled. The partition should just exist and empty so it can be filled
> during the experiment. When the experiment is done, then the data will be
> gone too.

Actually, since you do not need to retain the info across experiments,
there is a much better way to do this. Let me dig up the syntax.

Leigh





Leigh Stoller

Feb 10, 2015, 1:07:51 PM
to Asep Noor Mukhdari Sutrisna, cloudla...@googlegroups.com
> Actually, since you do not need to retain the info across experiments,
> there is a much better way to do this. Let me dig up the syntax.

So the most portable way (within ProtoGENI) to do this is with this syntax
in your rspec:

<node client_id="n1" exclusive="true">
<emulab:blockstore name="b1"
size="10GB"
class="local"
mountpoint="/foo" />
</node>

You might need to add this in the rspec header:

xmlns:emulab="http://www.protogeni.net/resources/rspec/ext/emulab/1"

This will be temporary space that will disappear when the experiment is
terminated, and it will work across the x86 clusters (not on the Cloudlab
moonshot cluster, not yet).
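For readers generating rspecs programmatically, the shape of the blockstore element above can be sketched with Python's standard library. This is only an illustration of the XML quoted in this thread (the element names, attributes, and namespace URI come from the messages above), not an official CloudLab tool:

```python
import xml.etree.ElementTree as ET

# Namespace URI quoted in the thread for the emulab rspec extension.
EMULAB_NS = "http://www.protogeni.net/resources/rspec/ext/emulab/1"

node = ET.Element("node", {"client_id": "n1", "exclusive": "true"})
ET.SubElement(node, "{%s}blockstore" % EMULAB_NS, {
    "name": "b1",
    "size": "10GB",        # temporary local space, gone at termination
    "class": "local",
    "mountpoint": "/foo",  # created and mounted automatically
})

# Ask the serializer to use the "emulab" prefix instead of "ns0".
ET.register_namespace("emulab", EMULAB_NS)
print(ET.tostring(node, encoding="unicode"))
```

The printed element should match the hand-written rspec snippet above, modulo attribute ordering.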

Note that we might still need to do another snapshot to update the Emulab
clientside on your disk image, if your image does not have recent enough
code, but that is easy to do too.

Leigh





Asep Noor Mukhdari Sutrisna

Feb 10, 2015, 1:20:16 PM
to Leigh Stoller, cloudla...@googlegroups.com

> On Feb 10, 2015, at 7:07 PM, Leigh Stoller <lbst...@gmail.com> wrote:
>
>> Actually, since you do not need to retain the info across experiments,
>> there is a much better way to do this. Let me dig up the syntax.
>
> So the most portable (within Protogeni) way to do this is this syntax
> in your rspec:
>
> <node client_id="n1" exclusive="true">
> <emulab:blockstore name="b1"
> size="10GB"
> class="local"
> mountpoint="/foo" />
> </node>
>
> You might need to add this in the rspec header:
>
> xmlns:emulab="http://www.protogeni.net/resources/rspec/ext/emulab/1"
>
> This will be temporary space that will disappear when the experiment is
> terminated, and it will work across the x86 clusters (not on the Cloudlab
> moonshot cluster, not yet).

Thanks, I think this is what I needed. Is there any limit on how large a blockstore I can allocate?

> Note that we might still need to do another snapshot to update the Emulab
> clientside on your disk image, if your image does not have recent enough
> code, but that is easy to do too.

OK, let me know how to update my Emulab client before taking the next snapshot.

Regards,

Asep

Leigh Stoller

Feb 10, 2015, 1:24:53 PM
to Asep Noor Mukhdari Sutrisna, cloudla...@googlegroups.com
> Ok, let me know on how to update my emulab client before taking the next snapshot.

We only need to do that if your client side is out of date. Go ahead and
give it a try and if it fails, we will go from there.

You can make the blockstore as big as the amount of space left on the local
disk. On the APT cluster that is about 470 GB. But it might be different
on the other x86 clusters, so for now it is best to pick a size that is not
too much bigger than what you think you might need.

If you need more than 470 GB … well, let's hope not. :-)

Leigh





Asep Noor Mukhdari Sutrisna

Feb 10, 2015, 2:13:52 PM
to Leigh Stoller, cloudla...@googlegroups.com
Hi,

> On Feb 10, 2015, at 7:24 PM, Leigh Stoller <lbst...@gmail.com> wrote:
>
>> Ok, let me know on how to update my emulab client before taking the next snapshot.
>
> We only need to do that if your client side is out of date. Go ahead and
> give it a try and if it fails, we will go from there.

I have modified my RSpec to add a 300 GB blockstore, but when I instantiated it, I saw no new mount point (checking with 'mount' and 'df').
Must the mount point be an existing path in the image, or will it be created automatically? Also, is it supposed to be mounted automatically, or do I have to do that manually?

> You can make the blockstore as big the amount of space left on the local
> disk. On the APT cluster that is about 470GB. But it might be different
> on the other x86 clusters, so for now best to pick a size that is not too
> much bigger then what you think you might need.
>
> If you need more then 470GB … well lets hope not. :-)

300 GB should be sufficient; I'm referring to these node requirements: http://tomato-lab.org/join/node_requirements/

Thanks,

Asep

Leigh Stoller

Feb 10, 2015, 2:25:10 PM
to Asep Noor Mukhdari Sutrisna, cloudla...@googlegroups.com
> I have modified my RSpec to add a 300GB blockstore, but when I instantiated it, I see no new mount point (using ‘mount’ and 'df').
> Is the mount point must be an existing path in the image or will it be created automatically? Also, is it supposed to be automatically mounted or I have to do it manually?

It should all have been done. Is this experiment still active?
If so, can you send me the URL to the status page?

Most likely we just need the client-side update; I will check.

Leigh





Asep Noor Mukhdari Sutrisna

Feb 10, 2015, 2:27:44 PM
to Leigh Stoller, cloudla...@googlegroups.com
Yes, it's still active.
Here's the URL: https://www.cloudlab.us/status.php?uuid=80877a3e-b154-11e4-97ea-38eaa71273fa

Thanks,

Asep

Leigh Stoller

Feb 10, 2015, 3:36:00 PM
to Asep Noor Mukhdari Sutrisna, cloudla...@googlegroups.com
Ah, I found the bug in the server code that caused your blockstore
to be ignored. I will have a fix for that installed in a bit, and then
you can recreate that experiment to test my fix.

Stay tuned …

Leigh





Leigh Stoller

Feb 10, 2015, 3:42:34 PM
to Asep Noor Mukhdari Sutrisna, cloudla...@googlegroups.com
> Ah, I found the bug in the server code that caused your blockstore
> to be ignored. I will have a fix for that installed in a bit, and then
> you can recreate that experiment to test my fix.
>
> Stay tuned …

Okay, fix installed, please retry that profile. Thanks!

Leigh





Asep Noor Mukhdari Sutrisna

Feb 10, 2015, 3:58:01 PM
to Leigh Stoller, cloudla...@googlegroups.com
Hmm, I think I still have no new disk mounted.
FYI, I don't have a /tomato path in the image (I assumed it would be created automatically).

Here’s the status URL:
https://www.cloudlab.us/status.php?uuid=e4db5f29-b165-11e4-97ea-38eaa71273fa

Regards,

Asep

Leigh Stoller

Feb 10, 2015, 3:59:44 PM
to Asep Noor Mukhdari Sutrisna, cloudla...@googlegroups.com
> Hmm, I think I still have no new disk mounted.
> Fyi I don’t have /tomato path in the image (I assumed it’d be created automatically)

Yep, now we have hit the out-of-date client side. I am going to
update that on your node, reboot to make sure it works, and then
send you the recipe.

Leigh





Leigh Stoller

Feb 10, 2015, 6:12:31 PM
to Asep Noor Mukhdari Sutrisna, cloudla...@googlegroups.com
> Yep, now we are at the out of date client side. I am going to
> update that on your node, reboot to make sure it works, and then
> send you the recipe.

Okay, one of our guys (thanks Kirk!) was able to fix the problem that
Debian was having, so we are good to go.

Here is the recipe for updating the client side on your Debian image. You
will want to do this and then take a snapshot. Then retest with the block
store. The first two lines are to save off the passwd/group files that have
the accounts you added. Just in case ...

cd /etc
cp -rp emulab emulab.save
cd /var/tmp
git clone git://git-public.flux.utah.edu/emulab-devel.git
mkdir obj
cd obj
../emulab-devel/clientside/configure --with-TBDEFS=../emulab-devel/defs-utahclient
make
cd tmcc/linux
make simple-install






Asep Noor Mukhdari Sutrisna

Feb 10, 2015, 6:53:12 PM
to Leigh Stoller, cloudla...@googlegroups.com
Hi,
I've created a new snapshot with the updated Emulab client, and the blockstore finally works!
Great job; thanks for the support, Leigh & Kirk.

Regards,

Asep

Leigh Stoller

Feb 10, 2015, 6:58:26 PM
to Asep Noor Mukhdari Sutrisna, cloudla...@googlegroups.com
> I’ve created a new snapshot with updated emulab-client, and the block store finally works!

Ah, excellent news!

Leigh





Suli Yang

Feb 15, 2015, 10:39:52 PM
to cloudla...@googlegroups.com, asep...@gmail.com

Hi,

I have a similar question:

I would like to increase the size of the root partition of the node in my profile (split-io1). For now it's just 16 GB; is there a way to increase it to, say, 128 GB?

Also, when I take a snapshot (or a clone), how do I make sure it captures not only the root partition but also the other disks I have mounted?

Thanks!

Suli

Leigh Stoller

Feb 16, 2015, 1:13:57 PM
to Suli Yang, cloudla...@googlegroups.com, asep...@gmail.com
> I will to increase the size of my root partitions of the node in my
> profile (split-io1). For now it's just 16GB, is there a way to increase
> it to, say, 128GB?

Hi, the root partition cannot be enlarged, but you are free to add more
filesystems using the unused space on the first drive (about 470 GB on the
APT cluster).

The same questions apply, though: do you need space for a lot of read-only
data, or do you need read-write space that will change a lot or a little?

Once we have an idea of what you need, we can figure out the best way
to get you the extra space.

Leigh




杨苏立 Yang Su Li

Feb 16, 2015, 2:24:24 PM
to Leigh Stoller, cloudla...@googlegroups.com, asep...@gmail.com
Hi,

I need read/write space. Mostly I am going to read a few large files (no updates) and create/write some other files (the created files will be deleted shortly after), and for my experiment those files need to be in the same file system.

Thanks.

Suli
--
Suli Yang

Department of Physics
University of Wisconsin Madison

4257 Chamberlin Hall
Madison WI 53703

Leigh Stoller

Feb 16, 2015, 2:29:34 PM
to 杨苏立 Yang Su Li, cloudla...@googlegroups.com, asep...@gmail.com
> I need read/write spaces. Mostly I am going to read a few large files (no updates) and create/write some other files (the created files will be deleted short after), and for my experiment those files need to be in the same file system.

How big are the large files and do the files need to be local or can
they be on NAS disk?

Leigh





杨苏立 Yang Su Li

Feb 16, 2015, 2:31:52 PM
to Leigh Stoller, cloudlab-users, asep.noor
The read-only files are about 8 GB each, and I have 8 of them.

The write files are mostly 1 GB each, also about 8 of them.

These files need to be on local disk.

Thanks

Suli
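As a quick sanity check of these numbers against the roughly 470 GB of free local disk Leigh mentioned for the APT cluster (my arithmetic only, not a CloudLab guarantee):

```python
# Back-of-the-envelope sizing for this workload, using the numbers
# from the thread: 8 read-only files of ~8 GB and 8 write files of ~1 GB.
read_only_gb = 8 * 8   # eight ~8 GB input files
write_gb = 8 * 1       # eight ~1 GB output files
total_gb = read_only_gb + write_gb

apt_free_gb = 470      # approximate free local-disk space quoted for APT
print(total_gb)                 # 72
print(total_gb <= apt_free_gb)  # True: fits comfortably
```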

Leigh Stoller

Feb 16, 2015, 7:20:53 PM
to 杨苏立 Yang Su Li, cloudlab-users
> read only files are about 8G each, and I have 8 of them.
> write files are mostly 1G each, also about 8 of them.
> These files need to be on local disk.

Okay, the first thing to do is create a profile, if you have not already
created one specifically for this purpose, and create an experiment using
that profile.

Then on the node, you want to do this:

pc> sudo mkdir /somedir
pc> sudo /usr/local/etc/emulab/mkextrafs.pl /somedir

If you get a strange GPT error, do this:

pc> sudo apt-get install gdisk
pc> sudo sgdisk --zap /dev/sda
pc> sudo /usr/local/etc/emulab/mkextrafs.pl /somedir

Populate /somedir with your files, and then let me know when you are ready
so I can change some state in the database to make sure that the extra
partition is saved. Then you will do a snapshot on the profile to create a
new disk image that includes the extra partition.

Leigh





杨苏立 Yang Su Li

Feb 16, 2015, 7:52:42 PM
to Leigh Stoller, cloudlab-users
Well, I already have my files populated in a directory (more specifically, /mnt/ext4, which is the mount point of an ext4 filesystem).

Can I preserve what's in there, or do I have to create a new directory and repopulate the files?

Also: I need the directory to be on a separate disk, not just a separate partition.

Thanks.

Suli


Leigh Stoller

Feb 16, 2015, 7:57:23 PM
to 杨苏立 Yang Su Li, cloudlab-users
> Also: I need the directory to be in a separate disk, not just a separate
> partition.

Sorry, at this time we cannot support that. The best thing to do is what I
suggest, and then use a script to mount and copy the files out to the
second disk before you start your software running. Eight 8GB files will
copy pretty quickly on these machines.

Leigh





杨苏立 Yang Su Li

Feb 16, 2015, 9:03:41 PM
to Leigh Stoller, cloudlab-users
OK. I am done with populating the files. 

Could you please update the database? Thanks!

Suli

Leigh Stoller

Feb 17, 2015, 9:58:43 AM
to 杨苏立 Yang Su Li, cloudlab-users
> Could you please update the database? Thanks!

Okay, go ahead and do a snapshot in the web interface. Once it is done
create a new experiment and we can see if it got everything okay.

Leigh





杨苏立 Yang Su Li

Feb 17, 2015, 1:29:47 PM
to Leigh Stoller, cloudlab-users
I tried to take a snapshot twice, but both attempts failed. The error message I got is:

Checking for feature ImageProvenance.
reboot (apt080): Attempting to reboot ...
reboot (apt080): Successful!
reboot: Done. There were 0 failures.
   apt080
Waiting for nodes to come up.
All nodes are up.
apt080: started image capture for '/.amd_mnt/ops/proj/splitio-PG0/images/split-io1.ndz', waiting up to 72 minutes total or 8 minutes idle.
apt080: still waiting ... it has been 2 minutes. Current image size: 656408576 bytes.
apt080: still waiting ... it has been 4 minutes. Current image size: 1485832192 bytes.
apt080: still waiting ... it has been 6 minutes. Current image size: 2454716416 bytes.
apt080: still waiting ... it has been 8 minutes. Current image size: 3254779904 bytes.
apt080: still waiting ... it has been 10 minutes. Current image size: 4127195136 bytes.
apt080: still waiting ... it has been 12 minutes. Current image size: 4657774592 bytes.
apt080: still waiting ... it has been 14 minutes. Current image size: 4847501312 bytes.
apt080: still waiting ... it has been 16 minutes. Current image size: 5606735872 bytes.
apt080: still waiting ... it has been 18 minutes. Current image size: 6331301888 bytes.

Leigh Stoller

Feb 17, 2015, 1:31:15 PM
to 杨苏立 Yang Su Li, cloudlab-users
> I tried snapshot twice, but both attempts failed. The error message I got is:

Yep, there is a problem with the image-creation tools, which we
are looking into. Go ahead and extend your experiment for another
day or two while we look into what is going wrong.

Thanks!
Leigh





Mike Hibler

Feb 17, 2015, 1:52:16 PM
to Leigh Stoller, 杨苏立 Yang Su Li, cloudlab-users
Suli, I am going to need to log in to this machine and run the image
tools by hand. Is that okay? I won't delete or change anything.
> --
> You received this message because you are subscribed to the Google Groups "cloudlab-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to cloudlab-user...@googlegroups.com.
> To post to this group, send email to cloudla...@googlegroups.com.
> To view this discussion on the web visit https://groups.google.com/d/msgid/cloudlab-users/E1F48718-6C4A-4EBE-AC2C-F0B28BEB0666%40gmail.com.
> For more options, visit https://groups.google.com/d/optout.

Suli Yang

Feb 17, 2015, 2:06:56 PM
to Mike Hibler, Leigh Stoller, cloudlab-users
Sure. Go ahead.

Thanks!

From: Mike Hibler
Sent: ‎2/‎17/‎2015 12:52 PM
To: Leigh Stoller
Cc: 杨苏立 Yang Su Li; cloudlab-users
Subject: Re: [cloudlab-users] Disk space for physical node

Mike Hibler

Feb 17, 2015, 4:00:27 PM
to 杨苏立 Yang Su Li, Leigh Stoller, cloudlab-users
Okay, try again. We were imposing a system-wide 6 GiB maximum image size,
and your image apparently just crossed that line. I increased the value
to 10 GiB on Apt.
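To put the byte counts in the failed-snapshot log next to the caps Mike mentions, here is a small conversion sketch (assuming the usual binary GiB of 2^30 bytes):

```python
# Convert the last reported image size from the failed snapshot log
# and compare it with the old and new caps mentioned in the thread.
GIB = 1024 ** 3

last_reported = 6331301888   # bytes, from the "18 minutes" log line
old_cap = 6 * GIB            # former system-wide image-size limit
new_cap = 10 * GIB           # raised limit on Apt

print(round(last_reported / GIB, 2))  # ~5.9 GiB and still growing
print(last_reported < old_cap)        # the log cuts off just before the cap
```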

Asep Noor Mukhdari Sutrisna

Apr 22, 2015, 2:31:58 PM
to Leigh Stoller, cloudla...@googlegroups.com

>
> So the most portable (within Protogeni) way to do this is this syntax
> in your rspec:
>
> <node client_id="n1" exclusive="true">
> <emulab:blockstore name="b1"
> size="10GB"
> class="local"
> mountpoint="/foo" />
> </node>
>
> You might need to add this in the rspec header:
>
> xmlns:emulab="http://www.protogeni.net/resources/rspec/ext/emulab/1"

Hi,
I can get blockstore working by manually editing RSpec file.
How do I specify emulab:blockstore using geni-lib? I couldn’t find sample geni-lib code for this.
Thank you.

Regards,

Asep

Sarah Edwards

Apr 22, 2015, 2:36:59 PM
to Asep Noor Mukhdari Sutrisna, Sarah Edwards, Leigh Stoller, cloudla...@googlegroups.com, Nicholas Bastin
Hi Asep,

I'm adding Nick to the thread so he can address this.

Cheers,
Sarah

*******************************************************************************
Sarah Edwards
GENI Project Office

BBN Technologies
Cambridge, MA
phone: (617) 873-2329
email: sedw...@bbn.com





Nicholas Bastin

Apr 23, 2015, 1:33:29 AM
to Sarah Edwards, Asep Noor Mukhdari Sutrisna, Leigh Stoller, cloudla...@googlegroups.com
On Wed, Apr 22, 2015 at 8:36 AM, Sarah Edwards <sedw...@bbn.com> wrote:
> On Apr 22, 2015, at 2:31 PM, Asep Noor Mukhdari Sutrisna <asep...@gmail.com> wrote:
> I can get blockstore working by manually editing RSpec file.
> How do I specify emulab:blockstore using geni-lib? I couldn’t find sample geni-lib code for this.

I have just added support for this in geni-lib - if you're using the geni-lib provided inside the CloudLab UI you will have to wait for it to be updated internally.

If you're using your own geni-lib you can update from your clone with:

hg pull -u
hg update -C 0.9-DEV
sudo python setup.py install

(or whatever the right installation instruction is for your OS of choice, based on however you originally installed it)

The syntax looks like:

import geni.rspec.pg as PG
import geni.rspec.clext   # This adds the CloudLab extensions

r = PG.Request()
n = PG.RawPC("test")
n.Blockstore("bs-name", 10, "/mnt/foo")
r.addResource(n)

The Blockstore is automatically added to the node - the arguments are the name, the size in gigabytes, and the requested mount point.  You can add as many blockstores as you like to a given node, although there is no (easy) way to remove them at the moment.

--
Nick

Leigh Stoller

Apr 23, 2015, 10:06:18 AM
to Nicholas Bastin, Sarah Edwards, Asep Noor Mukhdari Sutrisna, cloudla...@googlegroups.com
> I have just added support for this in geni-lib - if you're using the geni-lib provided inside the CloudLab UI you will have to wait for it to be updated internally.
>
> If you're using your own geni-lib you can update from your clone with:
>
> hg pull -u
> hg update -C 0.9-DEV
> sudo python setup.py install

Thanks Nick! As soon as someone here merges your branch into our
branch, I will install it.


Leigh





Asep Noor Mukhdari Sutrisna

Apr 23, 2015, 10:09:26 AM
to Nicholas Bastin, Sarah Edwards, Leigh Stoller, cloudla...@googlegroups.com
Hi,

Thank you, Nick; I'm using my own geni-lib.
I've tried the code, and the resulting RSpec looks like this:

  <node client_id="thm3" exclusive="true">
    <sliver_type name="raw">
    </sliver_type>
    <services>
            <execute shell="sh" command="sudo apt-get update &amp;&amp; sudo apt-get -y upgrade"/>
    </services>
    <ns0:blockstore xmlns:ns0="http://www.protogeni.net/resources/rspec/ext/emulab/1" name="bs-tomato" size="300GB" mountpoint="/tomato" class="local"/>
  </node>
</rspec>

It became <ns0:blockstore …>.
But I couldn't find the block storage mounted in the node instances.
Is something wrong?
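One point worth noting: the ns0: prefix by itself is harmless. XML namespace prefixes are arbitrary, so a parser treats ns0:blockstore and emulab:blockstore as the same element whenever both map to the same namespace URI (the one quoted earlier in the thread). A quick sketch with Python's standard library:

```python
import xml.etree.ElementTree as ET

# Namespace URI for the emulab rspec extension, as quoted in the thread.
NS = "http://www.protogeni.net/resources/rspec/ext/emulab/1"

# Two spellings of the same element: only the prefix differs.
a = ET.fromstring(
    '<ns0:blockstore xmlns:ns0="%s" name="bs-tomato" size="300GB"/>' % NS)
b = ET.fromstring(
    '<emulab:blockstore xmlns:emulab="%s" name="bs-tomato" size="300GB"/>' % NS)

# Both parse to the same namespace-qualified tag.
print(a.tag)           # {http://www.protogeni.net/...}blockstore
print(a.tag == b.tag)  # True
```

So the ns0: spelling is not itself the reason the blockstore failed to mount.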

regards,

Asep

Leigh Stoller

Apr 23, 2015, 10:12:33 AM
to Asep Noor Mukhdari Sutrisna, Nicholas Bastin, Sarah Edwards, cloudla...@googlegroups.com
> it became:
> But, I couldn’t find the block storage mounted in the node instances.
> Something wrong?

Hard to say unless you tell us what experiment you have running that
exhibits this problem. A URL to the status page would be most useful
to us.

Leigh





Asep Noor Mukhdari Sutrisna

Apr 23, 2015, 10:24:01 AM
to Leigh Stoller, Nicholas Bastin, Sarah Edwards, cloudla...@googlegroups.com
I'm using geni-lib to instantiate the RSpec with my GENI portal credentials, so I'm not sure which status page you mean.
This one? https://portal.geni.net/secure/listresources.php?slice_id=54290cb6-a607-437b-afe4-bd3444709c8a&am_id[]=173

And here’s the manifest file:

<?xml version="1.0"?>
<rspec xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:client="http://www.protogeni.net/resources/rspec/ext/client/1" xmlns="http://www.geni.net/resources/rspec/3" xsi:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/3/manifest.xsd" type="manifest" expires="2015-04-29T09:23:17Z">
<node client_id="thm3" exclusive="true" component_id="urn:publicid:IDN+apt.emulab.net+node+apt033" component_manager_id="urn:publicid:IDN+apt.emulab.net+authority+cm" sliver_id="urn:publicid:IDN+apt.emulab.net+sliver+17251">
<sliver_type name="raw-pc">
</sliver_type>
<services>
<execute shell="sh" command="sudo touch /opt/created"/>
<execute shell="sh" command="sudo apt-get update &amp;&amp; sudo apt-get -y upgrade"/>
<login authentication="ssh-keys" hostname="apt033.apt.emulab.net" port="22" username="sutrisna"/>
<rs:console xmlns:rs="http://www.protogeni.net/resources/rspec/ext/emulab/1" server="boss.apt.emulab.net"/>
</services>
<ns0:blockstore xmlns:ns0="http://www.protogeni.net/resources/rspec/ext/emulab/1" name="bs-tomato" size="300GB" mountpoint="/tomato" class="local"/>
<rs:vnode xmlns:rs="http://www.protogeni.net/resources/rspec/ext/emulab/1" name="apt033"/>
<host name="thm3.thm-nodes01.ToMaTo-KL-PG0.apt.emulab.net"/>
</node>
<rs:site_info xmlns:rs="http://www.protogeni.net/resources/rspec/ext/site-info/1">
<rs:location country="US" latitude="40.750714" longitude="-111.893288"/>
</rs:site_info>
</rspec>


Regards,

Asep

Leigh Stoller

Apr 23, 2015, 10:27:35 AM
to Asep Noor Mukhdari Sutrisna, Nicholas Bastin, Sarah Edwards, cloudla...@googlegroups.com
> And here’s the manifest file:

Yep, that is what I need. Please extend this sliver for a day or
two so I have time to look into it.

Thanks!
Leigh





Leigh Stoller

Apr 23, 2015, 3:30:32 PM
to Asep Noor Mukhdari Sutrisna, Nicholas Bastin, Sarah Edwards, cloudla...@googlegroups.com
>> And here’s the manifest file:
>>
> Yep, that is what I need. Please extend this sliver for a day or
> two so I have to time to look into it.

Ah, this image was created in January, an eternity in CloudLab development
time :-)

The main problem is that there is a stale line in your /etc/fstab
that was left there by a buggy client side. So the first thing to
do is remove the line containing /dev/emulab/bs1 from /etc/fstab.
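Editing /etc/fstab by hand is simplest; as a rough sketch of the same cleanup done programmatically (the sample fstab contents here are invented for illustration; only the /dev/emulab/bs1 device name comes from the thread):

```python
# Sketch: drop the stale /dev/emulab/bs1 line from an fstab.
# Shown on a sample string; on a real node you would edit /etc/fstab
# itself (with sudo) rather than run this as-is.
SAMPLE_FSTAB = """\
/dev/sda1  /        ext4  errors=remount-ro  0  1
/dev/emulab/bs1  /tomato  ext4  defaults  0  0
proc  /proc  proc  defaults  0  0
"""

def strip_stale_blockstore(fstab_text, device="/dev/emulab/bs1"):
    """Return fstab_text without any line that mounts `device`."""
    kept = [line for line in fstab_text.splitlines()
            if not line.startswith(device)]
    return "\n".join(kept) + "\n"

cleaned = strip_stale_blockstore(SAMPLE_FSTAB)
print("/dev/emulab/bs1" in cleaned)  # False: the stale entry is gone
```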

Next, the image has old client side code on it and needs to be updated. The
easiest way to do this is to follow this recipe:

cd /etc/emulab
sudo mkdir Save
sudo cp passwd shadow group gshadow Save
cd /tmp
git clone git://git-public.flux.utah.edu/emulab-devel.git
mkdir obj
cd obj
../emulab-devel/clientside/configure --with-TBDEFS=../emulab-devel/defs-utahclient
gmake client
sudo gmake client-install
cd /etc/emulab/Save
sudo cp passwd shadow group gshadow ..
sudo reboot

Let it reboot, to make sure everything is working okay. Then you can take a
new snapshot.

Leigh





Asep Noor Mukhdari Sutrisna

Apr 23, 2015, 5:30:55 PM
to Leigh Stoller, Nicholas Bastin, Sarah Edwards, cloudla...@googlegroups.com
>
> Next, the image has old client side code on it and needs to be updated. The
> easiest way to do this is to follow this recipe:
>
> cd /etc/emulab
> sudo mkdir Save
> sudo cp passwd shadow group gshadow Save
> cd /tmp
> git clone git://git-public.flux.utah.edu/emulab-devel.git
> mkdir obj
> cd obj
> ../emulab-devel/clientside/configure --with-TBDEFS=../emulab-devel/defs-utahclient
> gmake client
> sudo gmake client-install

Hi Leigh,

I instantiated a new node in CloudLab to make a new snapshot and got this error at this step (sudo make client-install).
I couldn’t find gmake in my Debian disk image, so I used make instead.

*** WARNING: no libdevmapper, not building disk-agent
make[2]: Leaving directory `/tmp/obj/event/disk-agent'
make[2]: Entering directory `/tmp/obj/event/trafgen'
make trafgen
make[3]: Entering directory `/tmp/obj/event/trafgen'
make[3]: `trafgen' is up to date.
make[3]: Leaving directory `/tmp/obj/event/trafgen'
/usr/bin/install -c -m 755 trafgen /usr/local/etc/emulab/trafgen
make[2]: Leaving directory `/tmp/obj/event/trafgen'
make[1]: Leaving directory `/tmp/obj/event'
make[1]: Entering directory `/tmp/obj/tmcc'
make: Entering an unknown directory
make: *** install: No such file or directory. Stop.
make: Leaving an unknown directory
make[1]: *** [client-install] Error 2
make[1]: Leaving directory `/tmp/obj/tmcc'
make: *** [tmcc/client-install.MAKE] Error 2

> cd /etc/emulab/Save
> sudo cp passwd shadow group gshadow ..
> sudo reboot
>
> Let it reboot, to make sure everything is working okay. Then you can take a
> new snapshot.


I tried to continue anyway, rebooted, and made a snapshot. However, creating the snapshot failed.
Here’s the status URL: https://www.cloudlab.us/status.php?uuid=98e9eb81-e9f0-11e4-bd13-38eaa71273fa



regards,

Asep

Leigh Stoller

unread,
Apr 23, 2015, 5:44:12 PM
to Asep Noor Mukhdari Sutrisna, Nicholas Bastin, Sarah Edwards, cloudla...@googlegroups.com
Hmm, this means that our client side does not build and install on
Debian. I'm not entirely sure why, since we have a standard Debian image.
I will forward this to others in the group to see if someone knows what
is up.

> *** WARNING: no libdevmapper, not building disk-agent
> make[2]: Leaving directory `/tmp/obj/event/disk-agent'
> make[2]: Entering directory `/tmp/obj/event/trafgen'
> make trafgen
> make[3]: Entering directory `/tmp/obj/event/trafgen'
> make[3]: `trafgen' is up to date.
> make[3]: Leaving directory `/tmp/obj/event/trafgen'
> /usr/bin/install -c -m 755 trafgen /usr/local/etc/emulab/trafgen
> make[2]: Leaving directory `/tmp/obj/event/trafgen'
> make[1]: Leaving directory `/tmp/obj/event'
> make[1]: Entering directory `/tmp/obj/tmcc'
> make: Entering an unknown directory
> make: *** install: No such file or directory. Stop.
> make: Leaving an unknown directory
> make[1]: *** [client-install] Error 2
> make[1]: Leaving directory `/tmp/obj/tmcc'
> make: *** [tmcc/client-install.MAKE] Error 2
>

Leigh





Asep Noor Mukhdari Sutrisna

unread,
Apr 23, 2015, 6:12:04 PM
to Leigh Stoller, Nicholas Bastin, Sarah Edwards, cloudla...@googlegroups.com

> On Apr 23, 2015, at 11:44 PM, Leigh Stoller <lbst...@gmail.com> wrote:
>
> Hmm, this means that our client side does not build and install on
> Debian. Not entirely sure why that is, since we have a std debian image.
> I will forward to others in the group to see if someone knows what
> is up.

Thank you.
Actually, if I specify the blockstore name as “b1” then it mounts perfectly.
There are 2 entries for /dev/emulab/b1 in fstab, but that’s sufficient for me now just to get the extra disk space mounted.
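If the duplicated entry ever becomes a problem, collapsing repeats is a one-liner; a sketch, again against a throwaway copy of the file with made-up entries (on a real node, /etc/fstab and sudo):

```shell
# Sketch: keep only the first occurrence of each fstab line.
fstab=fstab.copy
cat > "$fstab" <<'EOF'
/dev/sda1      /        ext4  defaults  0 1
/dev/emulab/b1 /mydata  ext3  defaults  0 0
/dev/emulab/b1 /mydata  ext3  defaults  0 0
EOF
awk '!seen[$0]++' "$fstab" > "$fstab.dedup" && mv "$fstab.dedup" "$fstab"
grep -c '/dev/emulab/b1' "$fstab"
```

The awk idiom prints a line only the first time it is seen, so ordering is preserved.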

regards,

Asep

Leigh Stoller

unread,
Apr 23, 2015, 6:14:17 PM
to Asep Noor Mukhdari Sutrisna, Nicholas Bastin, Sarah Edwards, cloudla...@googlegroups.com
> Actually, if I specify the blockstore name as “b1” then it mounts perfectly.
> There are 2 entries for /dev/emulab/b1 in fstab, but that’s sufficient for me now just to get the extra disk space mounted.

Well, that’s clever! Good job.

Leigh





Asep Noor Mukhdari Sutrisna

unread,
May 6, 2015, 2:25:37 PM
to Leigh Stoller, Nicholas Bastin, Sarah Edwards, cloudla...@googlegroups.com

> On Apr 24, 2015, at 12:14 AM, Leigh Stoller <lbst...@gmail.com> wrote:
>
>> Actually, if I specify the blockstore name as “b1” then it mounts perfectly.
>> There are 2 entries for /dev/emulab/b1 in fstab, but that’s sufficient for me now just to get the extra disk space mounted.
>

Hi guys,
After several experiments, I just realized that when I request more than one node with a blockstore on each node, only the first node gets the block storage mounted.
The rest have no blockstore entry in fstab and no log indicating an error in /var/emulab/logs/*.
Attached is the manifest.

Thanks,

Asep

SLC-xtLps.xml

Leigh Stoller

unread,
May 6, 2015, 2:30:23 PM
to Asep Noor Mukhdari Sutrisna, cloudlab-users
> Hi guys, After several experiments, I just realized that when I request
> more than one node with a blockstore on each node, only the first node gets
> the block storage mounted. The rest have no blockstore entry in fstab
> and no log indicating an error in /var/emulab/logs/*. Attached is the
> manifest.

Hi, can you point us to a current experiment and node? That is the
quickest way for us to look into things.

Thanks!
Leigh

Asep Noor Mukhdari Sutrisna

unread,
May 6, 2015, 2:35:02 PM
to Leigh Stoller, cloudlab-users
>
> Hi, can you point us to a current experiment and node? That is the
> quickest way for us to look into things.

GENI Portal url: https://portal.geni.net/secure/slice.php?slice_id=b19b90e5-274d-4759-8097-8bb72a2b2e5a

Node with correct block storage mounted: apt027.apt.emulab.net
Other nodes with no block storage: apt022.apt.emulab.net, apt004.apt.emulab.net, apt005.apt.emulab.net

regards,

Asep

Leigh Stoller

unread,
May 6, 2015, 3:00:10 PM
to Asep Noor Mukhdari Sutrisna, cloudlab-users, Kirk Webb
> Hi guys, After several experiments, I just realized that when I request
> more than one node with a blockstore on each node, only the first node gets
> the block storage mounted. The rest have no blockstore entry in fstab
> and no log indicating an error in /var/emulab/logs/*. Attached is the
> manifest.

Hmm, I did not realize this, but the blockstore name (b1) has to be
unique within the rspec (experiment). I would suggest just picking
unique names, but then I remembered that you had a problem with your
image and were using b1 to bypass the problem with /etc/fstab?

I think you will have to fix up your image now …

Thanks!
Leigh

Asep Noor Mukhdari Sutrisna

unread,
May 6, 2015, 3:38:18 PM
to Leigh Stoller, cloudlab-users, Kirk Webb

> On May 6, 2015, at 9:00 PM, Leigh Stoller <lbst...@gmail.com> wrote:
>
>> Hi guys, After several experiments, I just realized that when I request
>> more than one node with blockstore on each node, only the first node gets
>> the block storage mounted. The rest don’t have blockstorage fstab entry
>> and no log indicating error in /var/emulab/logs/* Attached here's the
>> manifest.
>
> Hmm, I did not realize this, but the blockstore name (b1) has to be
> unique within the rspec (experiment). I would suggest just picking
> unique names, but then I remembered that you had a problem with your
> image and were using b1 to bypass the problem with /etc/fstab?

Ah, so the blockstore name must be unique.
Actually, I have removed b1 from /etc/fstab in my current disk image, so it works if I provide a unique blockstore name on each node.
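For reference, the per-node blockstore in the request rspec is declared with the emulab extension, and each declaration needs its own name. A sketch of what the unique-names layout looks like (element and attribute names are my best recollection of the extension, and the names, sizes, and mount points are illustrative; check against a working manifest):

```xml
<!-- Sketch: each node declares a blockstore with a name unique in the rspec. -->
<node client_id="node-0" exclusive="true">
  <emulab:blockstore name="bs0" mountpoint="/mydata" class="local" size="20GB"/>
</node>
<node client_id="node-1" exclusive="true">
  <emulab:blockstore name="bs1" mountpoint="/mydata" class="local" size="20GB"/>
</node>
```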

> I think you will have to fix up your image now …

Just in case I have another problem with my current disk image (Debian 7.6),
how do I get the URL of this DEB77-64-STD disk image?
urn:publicid:IDN+emulab.net+image+emulab-ops:DEB77-64-STD
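(For context, a request rspec references an image by URN inside the sliver type; a sketch, with client_id and the sliver type name as placeholders that may differ for your setup:)

```xml
<!-- Sketch: selecting a disk image by URN in a GENI v3 request rspec. -->
<node client_id="node-0" exclusive="true">
  <sliver_type name="raw-pc">
    <disk_image name="urn:publicid:IDN+emulab.net+image+emulab-ops:DEB77-64-STD"/>
  </sliver_type>
</node>
```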

I found the image on the GENI portal, but I can’t instantiate it in CloudLab.

Thanks,

Asep

Leigh Stoller

unread,
May 6, 2015, 4:17:10 PM
to Asep Noor Mukhdari Sutrisna, cloudlab-users, Kirk Webb
> Just in case I have another problem with my current disk image (debian 7.6),
> How do I get URL of this DEB77-64-STD disk image?
> urn:publicid:IDN+emulab.net+image+emulab-ops:DEB77-64-STD

Hi. I can place the DEB77 image on the APT cluster, but as with the
DEB76 image, we do not actually support it; it is provided by the
UGent people. Just looking at the date of the image, I can say right
away that the client side is out of date, though maybe less out of
date than the 76 image.

Leigh





Brecht Vermeulen

unread,
May 6, 2015, 4:20:36 PM
to cloudla...@googlegroups.com


Leigh Stoller wrote on 6/05/2015 at 22:17:
Yes, I need to work on it (the Debian 7.7 image is from Oct 21st, with
the emulab-devel tools of that moment).
Are you more interested in an updated Debian 7.8 or a Debian 8 image?

Brecht

Asep Noor Mukhdari Sutrisna

unread,
May 6, 2015, 4:37:50 PM
to Brecht Vermeulen, cloudla...@googlegroups.com
>
> yes, I need to work on it (the Debian 7.7 image is from Oct 21st ,with
> the emulab-devel tools of that moment).
> Are you more interested in an updated Debian 7.8 or a Debian 8 image ?
Hi,
Basically I need a Proxmox VE disk image, which I believe runs on Debian.
I haven’t tested the software I’m running on Debian 8, so Debian 7.8 would be a safer choice.
Thank you in advance.

regards,

Asep