Extending filesystem on LVM2


Pete

Apr 1, 2015, 6:21:05 PM
to securit...@googlegroups.com
Hello Doug et al.,

I've recently expanded the storage available on one of my sensors. This particular one runs in a virtual machine, but the following applies to bare metal as well, as long as the hardware supports hot-swapping drives. (FWIW, it's running great in the VM with 128 GB RAM and 24 processors, and it uses them all with 10 Bro workers and 8 Snort instances!)

I've read through https://github.com/Security-Onion-Solutions/security-onion/wiki/NewDisk but that's more about replacing a disk than adding to an existing one.

With the following steps, I was able to increase the available storage without stopping the sensors or rebooting the system. I initially partitioned the disk with 400 GB for root and 4 TB as an LVM2 physical volume (PV). The PV holds a volume group (VG) with two logical volumes (LVs): 100 GB for swap and the remainder for /nsm (formatted as ext4). I quickly outgrew the /nsm filesystem, so I added a 10 TB device from the SAN. On Ubuntu 12.04.5, you have to trigger a SCSI bus rescan manually, whereas on RHEL 6, for example, the new device just shows up after a few minutes.

The first step is determining which SCSI "host" is used for the existing disk. That's done with the lsscsi command, which is not always installed:

$ sudo apt-get install lsscsi
$ sudo lsscsi
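
For reference, the listing looks something like this (the vendor/model strings and device names here are illustrative, not from my actual sensor):

```shell
$ sudo lsscsi
[2:0:0:0]    disk    VMware   Virtual disk     1.0   /dev/sda
[3:0:0:0]    cd/dvd  NECVMWar VMware IDE CDR10 1.00  /dev/sr0
```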

This will print all the devices connected via the SCSI subsystem. The host is the first of the four colon-separated numbers ([host:bus:target:lun]). For example, if /dev/sda is at [2:0:0:0], then we need to rescan host2, which is done with this command:

$ echo "- - -" | sudo tee /sys/class/scsi_host/host2/scan
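
If you're not sure which host the new LUN will show up on, it's harmless to rescan all of them. A quick sketch (tee's stdout is silenced to keep the loop quiet):

```shell
# Rescan every SCSI host; hosts with no new devices simply ignore it.
for scan in /sys/class/scsi_host/host*/scan; do
    echo "- - -" | sudo tee "$scan" > /dev/null
done
```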

After a short while, you should see a new sd? device appear in /dev. In my case, it was /dev/sdb. You can verify that it has the size you expected using blockdev (the --getsize64 output is in bytes):

$ sudo blockdev --getsize64 /dev/sdb
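
Since --getsize64 reports bytes, a quick conversion helps with the sanity check. Keep in mind that a nominal 10 TB (decimal) LUN shows up as roughly 9.1 TiB; the byte count below is illustrative:

```shell
# 10 TB (decimal) in bytes, as blockdev --getsize64 might report it:
bytes=10000000000000
# Convert to binary TiB (1 TiB = 1024^4 bytes):
echo "$bytes" | awk '{ printf "%.2f TiB\n", $1 / 1024^4 }'
```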

My volume group is named nsm-vg, and the logical volume hosting the /nsm filesystem is nsm-lv. To add the new drive and grow the LV and filesystem, it takes just the following three commands (run them inside a screen or tmux session if you're connected via SSH, replacing the device, VG, and LV names with your own):

$ sudo vgextend nsm-vg /dev/sdb
$ sudo lvextend -l +100%FREE /dev/nsm-vg/nsm-lv
$ sudo resize2fs /dev/nsm-vg/nsm-lv

The last command takes quite a bit of time for larger drives, so please be patient.
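
Once resize2fs finishes, a few read-only commands will confirm everything landed where it should (using my VG/LV names; substitute your own):

```shell
# The VG should now span both PVs, with VFree near zero after +100%FREE:
sudo vgs nsm-vg
# The LV size should reflect the new total:
sudo lvs nsm-vg/nsm-lv
# And the mounted filesystem should match the LV size:
df -h /nsm
```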


Now for my question:

What steps do I need to take to allow the sensors to use the newly available disk? I'd rather not run sosetup again and lose all the sensor data and logs I already have.

I see the disk usage limit for ELSA is set in /etc/elsa_node.conf as "log_size_limit", calculated by sosetup from the number of GB you enter for ELSA logs. If I change that by editing the file manually, do I need to do anything to make it take effect (e.g., sudo service sphinxsearch restart)?

How about for the other sensor processes, like argus, netsniff-ng, sguil, etc.? Do the cleanup scripts check available vs. total ratio on each call, or are the sizes fixed at sosetup time?

I'll look into sending a pull request to the GitHub wiki, say for https://github.com/Security-Onion-Solutions/security-onion/wiki/ExtendLV, once I have all the info and have tested that everything works.

Thanks,
--
Pete

Doug Burks

Apr 2, 2015, 8:35:30 AM
to securit...@googlegroups.com
Hi Pete,

Replies inline.

On Wed, Apr 1, 2015 at 6:21 PM, Pete <peti...@gmail.com> wrote:
> [LVM2 expansion walkthrough snipped; see Pete's message above]

Nice job!

> Now for my question:
>
> What steps do I need to take to allow the sensors to use the newly available disk? I'd rather not run sosetup again and lose all the sensor data and logs I already have.
>
> I see the disk usage limit for ELSA is set in /etc/elsa_node.conf as "log_size_limit", calculated by sosetup from the number of GB you enter for ELSA logs. If I change that by editing the file manually, do I need to do anything to make it take effect (e.g., sudo service sphinxsearch restart)?

I *think* you can probably just restart syslog-ng. When in doubt, reboot! :)

> How about for the other sensor processes, like argus, netsniff-ng, sguil, etc.? Do the cleanup scripts check available vs. total ratio on each call, or are the sizes fixed at sosetup time?

The cleanup scripts simply use the CRIT_DISK_USAGE setting in
/etc/nsm/securityonion.conf, which is expressed as a percentage of
total disk space and is checked every time the cron job runs (every
minute). Please see /etc/cron.d/sensor-clean.
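
Since CRIT_DISK_USAGE is a percentage, growing /nsm automatically grows retention with no config change needed. For example, at a hypothetical 90% threshold, with sizes matching Pete's roughly 3.9 TB /nsm before and 13.9 TB after:

```shell
# CRIT_DISK_USAGE is a percent of total /nsm capacity, so the same
# threshold retains far more data after the filesystem grows:
awk 'BEGIN {
    printf "before: %.1f TB retained\n", 0.90 * 3.9
    printf "after:  %.1f TB retained\n", 0.90 * 13.9
}'
```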

> I'll look into sending a pull request to the github wiki, say for https://github.com/Security-Onion-Solutions/security-onion/wiki/ExtendLV, once I have all the info and am able to test everything works.

Sounds good, thanks!


--
Doug Burks
Need Security Onion Training or Commercial Support?
http://securityonionsolutions.com