Integrating OpenStack Neat with OpenStack Grizzly


Kashyap Raiyani

Jan 28, 2014, 11:51:44 PM
to opensta...@googlegroups.com
Hi Anton and everyone,

I have installed OpenStack Grizzly on Ubuntu 12.04, following https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/OVS_MultiNode/OpenStack_Grizzly_Install_Guide.rst . In my experiment, I have 1 controller node and 2 compute nodes. My goal is to perform dynamic VM migration.

Now my question is: is it possible to integrate OpenStack Neat with my setup? If yes, how should I proceed? (Again, my main goal is to perform dynamic VM migration.)

Thank you,
Raiyani Kashyap (Teaching Assistant),
M.Tech - II (Computer Network )
DA-IICT (www.daiict.ac.in)

Anton Beloglazov

Jan 29, 2014, 3:41:00 AM
to opensta...@googlegroups.com
Hi Raiyani,

Thanks for your interest in the project. I believe you should be able to use OpenStack Neat on your setup. Please try to follow the instructions given in the Installation section on the following page: https://github.com/beloglazov/openstack-neat

Best regards,
Anton


--
You received this message because you are subscribed to the Google Groups "OpenStack Neat" group.
To unsubscribe from this group and stop receiving emails from it, send an email to openstack-nea...@googlegroups.com.
Visit this group at http://groups.google.com/group/openstack-neat.
For more options, visit https://groups.google.com/groups/opt_out.

Kashyap Raiyani

Jan 29, 2014, 7:53:18 AM
to opensta...@googlegroups.com
Thanks, Anton,

I will try to install OpenStack Neat.

Regards,
kashyap

Kashyap Raiyani

Jan 30, 2014, 1:38:36 AM
to opensta...@googlegroups.com

Hi Anton,

Following https://github.com/beloglazov/openstack-neat as you suggested, when I try to run ./all-start.sh I get the following error.
1) Error:
Traceback (most recent call last):
  File "./compute-data-collector-start.py", line 17, in <module>
    from neat.config import *
  File "/root/mygrit/neat/config.py", line 19, in <module>
    from contracts import contract
ImportError: No module named contracts
/etc/init.d/openstack-neat-global-manager: 24: .: Can't open /etc/rc.d/init.d/functions
/etc/init.d/openstack-neat-db-cleaner: 22: .: Can't open /etc/rc.d/init.d/functions
Traceback (most recent call last):
  File "./compute-local-manager-start.py", line 17, in <module>
    from neat.config import *
  File "/root/mygrit/neat/config.py", line 19, in <module>
    from contracts import contract
ImportError: No module named contracts

2) Do I have to create the database for Neat myself, or is it created automatically when I run python setup.py install?

Thanking you,
Kashyap

Anton Beloglazov

Feb 2, 2014, 2:25:35 AM
to opensta...@googlegroups.com
Hi Kashyap,

1. You need to install the dependencies as described here: https://github.com/beloglazov/openstack-neat/blob/master/setup/deps-centos.sh
2. You need to create a database and configure the credentials as described here: https://github.com/beloglazov/openstack-neat/tree/master/setup
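For reference, the database step can be sketched as follows on the controller. This assumes MySQL; 'neatpassword' is only a placeholder matching the connection strings quoted later in this thread, and the credentials must agree with those configured in /etc/neat/neat.conf.

```shell
# Create the Neat database and grant access to the 'neat' user.
# 'neatpassword' is a placeholder -- substitute your own password
# and keep it in sync with /etc/neat/neat.conf.
mysql -u root -p <<'SQL'
CREATE DATABASE neat;
GRANT ALL PRIVILEGES ON neat.* TO 'neat'@'%' IDENTIFIED BY 'neatpassword';
FLUSH PRIVILEGES;
SQL
```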

Best regards,
Anton


Kashyap Raiyani

Feb 11, 2014, 1:52:26 AM
to opensta...@googlegroups.com
Hi Anton,

Sorry for the late reply. I have installed the dependencies as you suggested, but I am still getting the following errors:

Errors:

root@nova2:~/mygrit# ./all-start.sh

Traceback (most recent call last):
  File "./compute-data-collector-start.py", line 17, in <module>
    from neat.config import *
  File "/root/mygrit/neat/config.py", line 19, in <module>
    from contracts import contract
  File "/usr/local/lib/python2.7/dist-packages/contracts/__init__.py", line 7, in <module>
    from . import syntax
  File "/usr/local/lib/python2.7/dist-packages/contracts/syntax.py", line 79, in <module>
    from .library import (EqualTo, Unary, Binary, composite_contract,
  File "/usr/local/lib/python2.7/dist-packages/contracts/library/__init__.py", line 28, in <module>
    from .array import (ShapeContract, Shape, Array, ArrayConstraint, DType,
  File "/usr/local/lib/python2.7/dist-packages/contracts/library/array.py", line 6, in <module>
    from .array_ops import (ArrayOR, ArrayAnd, DType, ArrayConstraint,
  File "/usr/local/lib/python2.7/dist-packages/contracts/library/array_ops.py", line 229, in <module>
    'np_float16': np.float16,  #  Half precision float: sign bit, 5 bits exponent, 10 bits mantissa
AttributeError: 'module' object has no attribute 'float16'

/etc/init.d/openstack-neat-global-manager: 24: .: Can't open /etc/rc.d/init.d/functions
/etc/init.d/openstack-neat-db-cleaner: 22: .: Can't open /etc/rc.d/init.d/functions
Traceback (most recent call last):
  File "./compute-local-manager-start.py", line 17, in <module>
    from neat.config import *
  File "/root/mygrit/neat/config.py", line 19, in <module>
    from contracts import contract
  File "/usr/local/lib/python2.7/dist-packages/contracts/__init__.py", line 7, in <module>
    from . import syntax
  File "/usr/local/lib/python2.7/dist-packages/contracts/syntax.py", line 79, in <module>
    from .library import (EqualTo, Unary, Binary, composite_contract,
  File "/usr/local/lib/python2.7/dist-packages/contracts/library/__init__.py", line 28, in <module>
    from .array import (ShapeContract, Shape, Array, ArrayConstraint, DType,
  File "/usr/local/lib/python2.7/dist-packages/contracts/library/array.py", line 6, in <module>
    from .array_ops import (ArrayOR, ArrayAnd, DType, ArrayConstraint,
  File "/usr/local/lib/python2.7/dist-packages/contracts/library/array_ops.py", line 229, in <module>
    'np_float16': np.float16,  #  Half precision float: sign bit, 5 bits exponent, 10 bits mantissa
AttributeError: 'module' object has no attribute 'float16'

Can you please help me resolve this, as I am not very experienced with Python?

Thanking You,
kashyap

Anton Beloglazov

Feb 11, 2014, 8:17:21 AM
to opensta...@googlegroups.com
Hi Kashyap,

Have you installed the latest versions of PyContracts and numpy?

Best regards,
Anton

Kashyap Raiyani

Feb 11, 2014, 9:24:53 AM
to opensta...@googlegroups.com
Hi Anton,

Yes. I used pip install --upgrade PyContracts. I was not able to install numpy and scipy directly, so I followed the process described at http://stackoverflow.com/questions/20093058/install-scipy-using-pip-in-virtualenv-on-ubuntu-12-04 .

Did I do this correctly?

Thanking You,
kashyap

Anton Beloglazov

Feb 11, 2014, 4:51:10 PM
to opensta...@googlegroups.com
To test whether they are installed correctly, you can run 'python', then 'import contracts' and 'import numpy', and see whether either import fails.
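The same check can be scripted; this is a small sketch that tries each import and reports the result (any other module names could be substituted):

```python
import importlib

# Try importing each of Neat's Python dependencies and record the
# result, instead of starting the services and reading tracebacks.
results = {}
for name in ("contracts", "numpy"):
    try:
        importlib.import_module(name)
        results[name] = "OK"
    except ImportError as exc:
        results[name] = "MISSING (%s)" % exc

for name in sorted(results):
    print("%-10s %s" % (name, results[name]))
```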

Best regards,
Anton



Kashyap Raiyani

Feb 12, 2014, 1:47:49 AM
to opensta...@googlegroups.com
Hi Anton,

Out of 3 systems, 'import numpy' and 'import contracts' succeed on 2, while on 1 system 'import contracts' fails as follows:

 >>>import contracts

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>

  File "/usr/local/lib/python2.7/dist-packages/contracts/__init__.py", line 7, in <module>
    from . import syntax
  File "/usr/local/lib/python2.7/dist-packages/contracts/syntax.py", line 79, in <module>
    from .library import (EqualTo, Unary, Binary, composite_contract,
  File "/usr/local/lib/python2.7/dist-packages/contracts/library/__init__.py", line 28, in <module>
    from .array import (ShapeContract, Shape, Array, ArrayConstraint, DType,
  File "/usr/local/lib/python2.7/dist-packages/contracts/library/array.py", line 6, in <module>
    from .array_ops import (ArrayOR, ArrayAnd, DType, ArrayConstraint,
  File "/usr/local/lib/python2.7/dist-packages/contracts/library/array_ops.py", line 229, in <module>
    'np_float16': np.float16,  #  Half precision float: sign bit, 5 bits exponent, 10 bits mantissa
AttributeError: 'module' object has no attribute 'float16'

2) On the other 2 systems, when I try to run ./all-start.sh, I get the following errors:


/etc/init.d/openstack-neat-global-manager: 24: .: Can't open /etc/rc.d/init.d/functions
/etc/init.d/openstack-neat-db-cleaner: 22: .: Can't open /etc/rc.d/init.d/functions



Regards,
kashyap


Anton Beloglazov

Feb 12, 2014, 3:35:54 AM
to opensta...@googlegroups.com
The system has been tested on CentOS, which is why the start-up scripts don't work on Ubuntu. On CentOS, installing the project creates 4 scripts, /usr/bin/neat-*. Please check whether they exist. If so, the services can be started by executing each of those scripts manually.

Best regards,
Anton


Kashyap Raiyani

Feb 12, 2014, 4:50:21 AM
to opensta...@googlegroups.com
Hi Anton,

There are 4 scripts located at /usr/local/bin/neat-*, and one of them is as follows:

root@controller:~# cat /usr/local/bin/neat-local-manager

#!/usr/bin/python
# EASY-INSTALL-ENTRY-SCRIPT: 'openstack-neat==0.1','console_scripts','neat-local-manager'
__requires__ = 'openstack-neat==0.1'
import sys
from pkg_resources import load_entry_point

if __name__ == '__main__':
    sys.exit(
        load_entry_point('openstack-neat==0.1', 'console_scripts', 'neat-local-manager')()
    )


1) Is this the same as your script in /usr/bin/neat-local-manager?

2) There are 4 scripts in /etc/init.d/openstack-neat-*, but they all contain the line '. /etc/rc.d/init.d/functions'. So how can I start all the services?


Thanking You,
kashyap

Anton Beloglazov

Feb 12, 2014, 5:29:20 PM
to opensta...@googlegroups.com
Hi Kashyap,

I think you can just execute each of the /usr/local/bin/neat-* scripts to start up the services.

Best regards,
Anton


Kashyap Raiyani

Feb 13, 2014, 12:52:52 AM
to opensta...@googlegroups.com
Hi Anton,

I tried to start the services, but I am facing the following errors:

root@controller:/usr/local/bin# ./neat-global-manager  
/bin/sh: 1: ether-wake: not found
/bin/sh: 1: ether-wake: not found
Traceback (most recent call last):
  File "./neat-global-manager", line 9, in <module>
    load_entry_point('openstack-neat==0.1', 'console_scripts', 'neat-global-manager')()
  File "/usr/local/lib/python2.7/dist-packages/openstack_neat-0.1-py2.7.egg/neat/globals/manager.py", line 169, in start
    state['compute_hosts'])
  File "<string>", line 2, in switch_hosts_on
  File "/usr/local/lib/python2.7/dist-packages/contracts/main.py", line 296, in contracts_checker
    result = function_(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/openstack_neat-0.1-py2.7.egg/neat/globals/manager.py", line 868, in switch_hosts_on
    db.insert_host_states(dict((x, 1) for x in hosts))
  File "<string>", line 2, in insert_host_states
  File "/usr/local/lib/python2.7/dist-packages/contracts/main.py", line 296, in contracts_checker
    result = function_(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/openstack_neat-0.1-py2.7.egg/neat/db.py", line 320, in insert_host_states
    for k, v in hosts.items()]
KeyError: 'nova2'


And 'import numpy' and 'import contracts' are not giving any errors.

2) The other services, ./neat-data-collector, ./neat-db-cleaner, and ./neat-local-manager, give no errors, but they do not start either.
3) Do I have to start all 4 services on all the systems?

Thanking You,
kashyap


Anton Beloglazov

Feb 13, 2014, 3:41:13 AM
to opensta...@googlegroups.com
Sorry, I didn't tell you the exact sequence earlier. Here is the content of the all-start.sh script, which is supposed to be run on the controller host:

./compute-data-collector-start.py
service openstack-neat-global-manager start
service openstack-neat-db-cleaner start
sleep 2
./compute-local-manager-start.py

This basically means that you need to perform the following steps:

1. Start the data collectors on all the compute hosts.
2. Start the global manager on the controller.
3. Start the db cleaner on the controller.
4. Wait for a couple of seconds to let all the services properly initialize.
5. Start the local managers on all the compute hosts.
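On Ubuntu, where the init.d scripts fail, the steps above can be approximated by running the entry-point scripts directly; a sketch only, using the /usr/local/bin/neat-* paths reported earlier in this thread (run each command on the host indicated in the comments):

```shell
# 1. On every compute host: start the data collector.
nohup /usr/local/bin/neat-data-collector &

# 2-3. On the controller: start the global manager and the db cleaner.
nohup /usr/local/bin/neat-global-manager &
nohup /usr/local/bin/neat-db-cleaner &

# 4. Give the services a moment to initialize.
sleep 2

# 5. On every compute host: start the local manager.
nohup /usr/local/bin/neat-local-manager &
```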

I hope that works.

Best regards,
Anton



Kashyap Raiyani

Feb 13, 2014, 7:17:05 AM
to opensta...@googlegroups.com
Hi Anton,

1) I tried to start the data collector on a compute host and waited for 1 hour, but it still didn't start. After sending a keyboard interrupt (Ctrl+C), I got the following errors:

root@nova1:/usr/local/bin# ./neat-data-collector
^CTraceback (most recent call last):
  File "./neat-data-collector", line 9, in <module>
    load_entry_point('openstack-neat==0.1', 'console_scripts', 'neat-data-collector')()
  File "<string>", line 2, in start
  File "/usr/local/lib/python2.7/dist-packages/contracts/main.py", line 296, in contracts_checker
    result = function_(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/openstack_neat-0.1-py2.7.egg/neat/locals/collector.py", line 140, in start
    int(interval))
  File "<string>", line 2, in start
  File "/usr/local/lib/python2.7/dist-packages/contracts/main.py", line 296, in contracts_checker
    result = function_(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/openstack_neat-0.1-py2.7.egg/neat/common.py", line 63, in start
    time.sleep(time_interval)
KeyboardInterrupt


2) On the controller, ./neat-global-manager is giving me the same error as I mentioned in the previous post.

I have looked at those files, but they do not make much sense to me, so I don't know what is causing these errors.

Thanking You,
kashyap
   

Anton Beloglazov

Feb 13, 2014, 10:51:10 PM
to opensta...@googlegroups.com
You can also check the log files, which should be in /var/log/neat/; they should give more information about what went wrong.

Best regards,
Anton



Kashyap Raiyani

Feb 14, 2014, 10:08:32 AM
to opensta...@googlegroups.com
Hi Anton,

I have tried everything without any luck. I am sharing my log files; could you please look into them?

1) neat-data-collector.log
2014-02-14 10:57:27,728 INFO     neat.locals.collector Creaned up the local data directory: /var/lib/neat
2014-02-14 10:57:27,728 INFO     neat.locals.collector Starting the data collector, iterations every 300 seconds
2014-02-14 10:57:27,776 DEBUG    neat.db Instantiated a Database object
2014-02-14 10:57:27,776 DEBUG    neat.db_utils Initialized a DB connection to mysql://neat:neatpa...@10.100.64.24/neat
2014-02-14 10:57:27,829 INFO     neat.locals.collector Started an iteration
2014-02-14 10:57:27,830 INFO     neat.locals.collector Started VM data collection
2014-02-14 10:57:27,830 INFO     neat.locals.collector Completed VM data collection
2014-02-14 10:57:27,830 INFO     neat.locals.collector Started host data collection
2014-02-14 10:57:27,830 INFO     neat.locals.collector Completed host data collection
2014-02-14 10:57:27,831 INFO     neat.locals.collector Completed an iteration


2) neat-global-manager.log
2014-02-14 11:01:24,806 DEBUG    neat.db Instantiated a Database object
2014-02-14 11:01:24,807 DEBUG    neat.db_utils Initialized a DB connection to mysql://neat:neatpa...@10.100.64.24/neat
2014-02-14 11:01:24,831 DEBUG    neat.globals.manager Calling: ether-wake -i eth0 78:2b:cb:92:20:ae
2014-02-14 11:01:24,861 DEBUG    neat.globals.manager Calling: ether-wake -i eth0 78:2b:cb:92:20:e0
2014-02-14 11:01:24,867 INFO     neat.globals.manager Switched on hosts: ['nova1', 'nova2']

./neat-global-manager is giving me the error I shared in the previous post.

Thanking You,
kashyap

Anton Beloglazov

Feb 16, 2014, 2:29:09 AM
to opensta...@googlegroups.com
These log files look OK. Did you check the logs from the other hosts?

Best regards,
Anton


Kashyap Raiyani

Feb 18, 2014, 1:42:47 AM
to opensta...@googlegroups.com
Hi Anton,

Something went wrong, so I am doing a fresh installation. But I am not the only one facing this problem: another user, Albert Vonpupp, had the same issue (https://groups.google.com/forum/#!topic/openstack-neat/yZlsCo9814U) and managed to solve it, noting that "Neat needs hosts without FQDN". Can you tell me what he meant by that?

Thanking You,
Kashyap

Kashyap Raiyani

Feb 20, 2014, 1:14:00 AM
to opensta...@googlegroups.com

Anton Beloglazov

Feb 20, 2014, 3:56:54 AM
to opensta...@googlegroups.com
Hi Kashyap,

Did you initialize the database? It looks like some of OpenStack's tables don't exist. The dependency installation should not have caused this.

FQDN stands for Fully Qualified Domain Name (http://en.wikipedia.org/wiki/Fully_qualified_domain_name). So it looks like the problem was in the way you specified the domain names of the hosts. I'm not exactly sure what's wrong with your configuration, though. Albert might be able to help you, since he had the same problem.
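In practice, "hosts without FQDN" usually means that each node's hostname command returns a short name (e.g. nova1 rather than nova1.example.com) and that /etc/hosts maps those short names directly. A hypothetical /etc/hosts fragment (the controller IP appears in the logs earlier in this thread; the compute IPs are placeholders):

```
10.100.64.24  controller
10.100.64.25  nova1
10.100.64.26  nova2
```

If OpenStack reports the hosts under one name while Neat's configuration uses another, lookups can fail with errors like the KeyError: 'nova2' seen earlier in this thread.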

Best regards,
Anton



Kashyap Raiyani

Feb 25, 2014, 8:38:20 AM
to opensta...@googlegroups.com
Hi Anton,

I am not able to solve that error, and after installing those dependencies my OpenStack is no longer working. I will try to install OpenStack on CentOS instead, since you also ran your experiments on it. Did you follow http://docs.openstack.org/havana/install-guide/install/yum/content/ for your OpenStack installation?

Thanking You,
Kashyap

Anton Beloglazov

Feb 25, 2014, 4:39:00 PM
to opensta...@googlegroups.com
Hi Kashyap,

I'm sorry you are having problems installing the project. I followed these steps to install OpenStack on CentOS: https://github.com/beloglazov/openstack-centos-kvm-glusterfs

Best regards,
Anton



Kashyap Raiyani

Feb 25, 2014, 10:13:06 PM
to opensta...@googlegroups.com
Hi Anton,

I went through your script, but my architecture is different from yours: I don't have a gateway node, only 2 compute nodes and 1 controller node, all connected to each other through a single NIC (eth0) and getting their IPs from my campus DHCP server. In this case, should I proceed the same way as described?

Thanking You,
Kashyap 

Anton Beloglazov

Feb 26, 2014, 3:17:59 AM
to opensta...@googlegroups.com
I think this shouldn't matter: Neat relies on host names for communication between components. As long as they are accessible over the network, it should be fine. Just make sure that all the host names are configured correctly in /etc/hosts and /etc/neat/neat.conf on all the hosts.
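For illustration, a neat.conf fragment consistent with the values seen in this thread's logs might look like the following. The option names are recalled from the project's sample configuration and may not be exact; treat this as a sketch and verify every key against the neat.conf template shipped with the project:

```
[DEFAULT]
# Values modeled on the logs in this thread; adjust to your setup.
sql_connection = mysql://neat:neatpassword@controller/neat
global_manager_host = controller
global_manager_port = 60080
compute_hosts = compute1 compute2
ether_wake_interface = eth0
sleep_command = pm-suspend
log_directory = /var/log/neat
```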

Best regards,
Anton



Kashyap Raiyani

Feb 26, 2014, 4:38:05 AM
to opensta...@googlegroups.com
Hi Anton,

But I don't have a gateway node. So on which system should I install 01-network-gateway and 09-openstack-gateway? In lib/nova-config.sh, under "Set the network configuration", what should my network_host name be?

Can I have my controller act as the gateway node?

Thanking You,
Kashyap

Anton Beloglazov

Feb 26, 2014, 4:41:43 AM
to opensta...@googlegroups.com
What I meant is that you don't have to follow the guide I linked; you can install OpenStack as you wish and then just configure the host names and Neat accordingly. Yes, you can use your controller as a gateway, but then you will have to modify some of the scripts from the guide.

Cheers,
Anton



Kashyap Raiyani

Feb 26, 2014, 5:16:37 AM
to opensta...@googlegroups.com
Hi Anton,

My main question is: does the gateway node also host VMs, since it is a compute node in your script? Or does it only act as the OpenStack network node through which the others communicate?

Thanking You,
Kashyap

Kashyap Raiyani

Feb 27, 2014, 9:11:06 AM
to opensta...@googlegroups.com
Hi Anton,

While following your script, during the installation of KVM on a compute node I started getting the following error:
Transaction Check Error:
  file /usr/lib64/libgfrpc.so.0.0.0 from install of glusterfs-libs-3.4.0.57rhs-1.el6_5.x86_64 conflicts with file from package glusterfs-3.3.1-1.el6.x86_64
  file /usr/lib64/libgfxdr.so.0.0.0 from install of glusterfs-libs-3.4.0.57rhs-1.el6_5.x86_64 conflicts with file from package glusterfs-3.3.1-1.el6.x86_64
  file /usr/lib64/libglusterfs.so.0.0.0 from install of glusterfs-libs-3.4.0.57rhs-1.el6_5.x86_64 conflicts with file from package glusterfs-3.3.1-1.el6.x86_64


After that, whatever I do I get this error and cannot install any packages. When I try to update any package using yum, I get a glusterfs-related error, the same as in https://access.redhat.com/site/solutions/641013.

How do I resolve this glusterfs issue?

Thanking You,
Kashyap


Kashyap Raiyani

Feb 28, 2014, 2:53:42 AM
to opensta...@googlegroups.com
Hi Anton,

I have resolved the glusterfs issue.


Regards,
Kashyap

Anton Beloglazov

Feb 28, 2014, 3:32:25 AM
to opensta...@googlegroups.com
Awesome! How did you solve it?

Best regards,
Anton



Kashyap Raiyani

Mar 2, 2014, 2:16:46 AM
to opensta...@googlegroups.com
Hi Anton,

I downloaded the glusterfs repo file from their official site, added it to yum manually, and it worked for me.

While working with your script, I got some Python errors when trying to create the nova-network, and on my controller I'm getting "XXX". I faced the same problem after trying to install those dependencies on my OpenStack (Ubuntu 12.04). Did you get any errors while implementing the script?


Thanking You,
Kashyap

Anton Beloglazov

Mar 4, 2014, 5:41:38 AM
to opensta...@googlegroups.com
Hi Kashyap,

I had problems while working on the implementation, but resolved them in the end. I can't really recommend a solution without seeing the error logs.

Best regards,
Anton



Kashyap Raiyani

Mar 18, 2014, 5:16:26 AM
to opensta...@googlegroups.com
Hi Anton,

I am trying to install OpenStack on CentOS 6.3, and while doing that I am facing an error on the compute node.

Compute node:
[root@compute compute]# nova-manage service list
2014-03-18 13:47:59.840 28178 WARNING nova.openstack.common.db.sqlalchemy.session [req-b1c4b21c-3317-4cdb-b641-bb0ed9a289cd None None] SQL connection failed. infinite attempts left.


On the other hand, the controller is working fine:

[root@controller ~]# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-cert        controller                           internal         enabled    :-)   2014-03-18 08:17:37
nova-consoleauth controller                           internal         enabled    :-)   2014-03-18 08:17:37
nova-scheduler   controller                           internal         enabled    :-)   2014-03-18 08:17:46
[root@controller ~]#

I have checked the neat.conf file, and everything seems fine to me.

What might be going wrong?

Thanking You,
Kashyap

Kashyap Raiyani

Mar 18, 2014, 4:14:30 PM
to opensta...@googlegroups.com
Hi Anton,

I think I have finally managed to install OpenStack Neat. I am sharing the log files from the controller and the compute node; could you please look at them and let me know whether OpenStack Neat is working fine?

Controller :

1) global-manager.log
2014-03-19 01:18:56,797 DEBUG    neat.db Instantiated a Database object
2014-03-19 01:18:56,797 DEBUG    neat.db_utils Initialized a DB connection to mysql://neat:neatpassword@controller/neat
2014-03-19 01:18:56,896 DEBUG    neat.globals.manager Calling: ether-wake -i eth0 00:0c:29:83:e3:5c
2014-03-19 01:18:56,902 INFO     neat.globals.manager Switched on hosts: ['compute']
2014-03-19 01:18:56,957 INFO     neat.globals.manager Starting the global manager listening to controller:60080


2) db-cleaner.log
2014-03-19 01:19:39,331 INFO     neat.globals.db_cleaner Starting the database cleaner, iterations every 7200 seconds
2014-03-19 01:19:39,360 DEBUG    neat.db Instantiated a Database object
2014-03-19 01:19:39,360 DEBUG    neat.db_utils Initialized a DB connection to mysql://neat:neatpassword@controller/neat
2014-03-19 01:19:39,392 INFO     neat.globals.db_cleaner Cleaned up data older than 2014-03-18 23:19:39


Compute :

1) data-collector.log
2014-03-19 06:36:44,916 INFO     neat.locals.collector Created a local VM data directory: /var/lib/neat/vms
2014-03-19 06:36:44,916 INFO     neat.locals.collector Starting the data collector, iterations every 300 seconds
2014-03-19 06:36:45,184 DEBUG    neat.db Instantiated a Database object
2014-03-19 06:36:45,184 DEBUG    neat.db_utils Initialized a DB connection to mysql://neat:neatpassword@controller/neat
2014-03-19 06:36:45,192 INFO     neat.db Created a new DB record for a host compute, id=1
2014-03-19 06:36:45,192 INFO     neat.locals.collector Started an iteration
2014-03-19 06:36:45,193 INFO     neat.locals.collector Started VM data collection
2014-03-19 06:36:45,193 INFO     neat.locals.collector Completed VM data collection
2014-03-19 06:36:45,193 INFO     neat.locals.collector Started host data collection
2014-03-19 06:36:45,193 INFO     neat.locals.collector Completed host data collection
2014-03-19 06:36:45,194 INFO     neat.locals.collector Completed an iteration
2014-03-19 06:42:32,490 INFO     neat.locals.collector Creaned up the local data directory: /var/lib/neat
2014-03-19 06:42:32,490 INFO     neat.locals.collector Starting the data collector, iterations every 300 seconds
2014-03-19 06:42:32,528 DEBUG    neat.db Instantiated a Database object
2014-03-19 06:42:32,528 DEBUG    neat.db_utils Initialized a DB connection to mysql://neat:neatpassword@controller/neat
2014-03-19 06:42:32,531 INFO     neat.locals.collector Started an iteration
2014-03-19 06:42:32,533 INFO     neat.locals.collector Started VM data collection
2014-03-19 06:42:32,533 INFO     neat.locals.collector Completed VM data collection
2014-03-19 06:42:32,533 INFO     neat.locals.collector Started host data collection
2014-03-19 06:42:32,533 INFO     neat.locals.collector Completed host data collection
2014-03-19 06:42:32,533 INFO     neat.locals.collector Completed an iteration
2014-03-19 06:45:11,033 INFO     neat.locals.collector Creaned up the local data directory: /var/lib/neat
2014-03-19 06:45:11,033 INFO     neat.locals.collector Starting the data collector, iterations every 300 seconds
2014-03-19 06:45:11,088 DEBUG    neat.db Instantiated a Database object
2014-03-19 06:45:11,088 DEBUG    neat.db_utils Initialized a DB connection to mysql://neat:neatpassword@controller/neat
2014-03-19 06:45:11,092 INFO     neat.locals.collector Started an iteration
2014-03-19 06:45:11,093 INFO     neat.locals.collector Started VM data collection
2014-03-19 06:45:11,093 INFO     neat.locals.collector Completed VM data collection
2014-03-19 06:45:11,093 INFO     neat.locals.collector Started host data collection
2014-03-19 06:45:11,094 INFO     neat.locals.collector Completed host data collection
2014-03-19 06:45:11,094 INFO     neat.locals.collector Completed an iteration
2014-03-19 06:50:11,195 INFO     neat.locals.collector Started an iteration
2014-03-19 06:50:11,197 INFO     neat.locals.collector Started VM data collection
2014-03-19 06:50:11,198 INFO     neat.locals.collector Completed VM data collection
2014-03-19 06:50:11,198 INFO     neat.locals.collector Started host data collection
2014-03-19 06:50:11,198 INFO     neat.locals.collector Completed host data collection
2014-03-19 06:50:11,209 DEBUG    neat.locals.collector Collected VM CPU MHz: {}
2014-03-19 06:50:11,209 DEBUG    neat.locals.collector Collected total VMs CPU MHz: 0
2014-03-19 06:50:11,210 DEBUG    neat.locals.collector Collected hypervisor CPU MHz: 81
2014-03-19 06:50:11,210 DEBUG    neat.locals.collector Collected host CPU MHz: 81
2014-03-19 06:50:11,210 DEBUG    neat.locals.collector Collected total CPU MHz: 81
2014-03-19 06:50:11,235 DEBUG    neat.locals.collector Overload state logged: False
2014-03-19 06:50:11,235 INFO     neat.locals.collector Completed an iteration
2014-03-19 06:55:11,281 INFO     neat.locals.collector Started an iteration
2014-03-19 06:55:11,284 INFO     neat.locals.collector Started VM data collection
2014-03-19 06:55:11,285 INFO     neat.locals.collector Completed VM data collection
2014-03-19 06:55:11,285 INFO     neat.locals.collector Started host data collection
2014-03-19 06:55:11,285 INFO     neat.locals.collector Completed host data collection
2014-03-19 06:55:11,296 DEBUG    neat.locals.collector Collected VM CPU MHz: {}
2014-03-19 06:55:11,296 DEBUG    neat.locals.collector Collected total VMs CPU MHz: 0
2014-03-19 06:55:11,296 DEBUG    neat.locals.collector Collected hypervisor CPU MHz: 139
2014-03-19 06:55:11,297 DEBUG    neat.locals.collector Collected host CPU MHz: 139
2014-03-19 06:55:11,297 DEBUG    neat.locals.collector Collected total CPU MHz: 139
2014-03-19 06:55:11,297 INFO     neat.locals.collector Completed an iteration


2) local-manager.log
2014-03-19 06:48:51,970 INFO     neat.locals.manager Starting the local manager, iterations every 300 seconds
2014-03-19 06:48:52,027 DEBUG    neat.db Instantiated a Database object
2014-03-19 06:48:52,027 DEBUG    neat.db_utils Initialized a DB connection to mysql://neat:neatpassword@controller/neat
2014-03-19 06:48:52,028 INFO     neat.locals.manager Started an iteration
2014-03-19 06:48:52,029 INFO     neat.locals.manager The host is idle
2014-03-19 06:48:52,029 INFO     neat.locals.manager Skipped an iteration
2014-03-19 06:53:52,129 INFO     neat.locals.manager Started an iteration
2014-03-19 06:53:52,130 INFO     neat.locals.manager The host is idle
2014-03-19 06:53:52,130 INFO     neat.locals.manager Skipped an iteration
2014-03-19 06:58:52,231 INFO     neat.locals.manager Started an iteration
2014-03-19 06:58:52,232 INFO     neat.locals.manager The host is idle
2014-03-19 06:58:52,232 INFO     neat.locals.manager Skipped an iteration



I think everything is working fine, but I wanted your confirmation that OpenStack Neat is properly integrated with OpenStack.

Thanking You,
Kashyap  

Anton Beloglazov

Mar 18, 2014, 5:53:35 PM
to opensta...@googlegroups.com
Hi Kashyap,

From the logs it seems that everything works fine. You can monitor the current VM placement using the vm-placement.py script. Try to create several VMs and see if they get consolidated by OpenStack Neat.

How did you solve your problems? It could help other people facing the same issues.

Cheers,
Anton



Kashyap Raiyani

Mar 20, 2014, 7:25:26 AM
to opensta...@googlegroups.com
Hi Anton,

My previous installation of Neat was on VMware. This time I installed OpenStack Neat on 2 compute nodes and 1 controller. With no VMs running on the compute hosts, Neat should suspend an idle host (either of those compute hosts) within 5 minutes, but it didn't happen. I waited for 20 minutes and nothing happened. What might have gone wrong?
Also, pm-suspend works when run manually.

Thanking You,
Kashyap

Anton Beloglazov

Mar 20, 2014, 8:45:53 AM
to opensta...@googlegroups.com
Hi Kashyap,

Please check the log of the global manager to see whether it actually executes pm-suspend.

Best regards,
Anton


--

Kashyap Raiyani

Mar 20, 2014, 10:11:51 AM
to opensta...@googlegroups.com
Hi Anton,

No, the global manager didn't execute pm-suspend. The log file is as follows:

global-manager.log:

2014-03-20 16:20:00,786 DEBUG    neat.db Instantiated a Database object
2014-03-20 16:20:00,786 DEBUG    neat.db_utils Initialized a DB connection to mysql://neat:neatpassword@controller/neat
2014-03-20 16:20:00,837 DEBUG    neat.globals.manager Calling: ether-wake -i eth0 78:2b:cb:92:20:e0
2014-03-20 16:20:00,846 DEBUG    neat.globals.manager Calling: ether-wake -i eth0 78:2b:cb:92:20:ae
2014-03-20 16:20:00,849 INFO     neat.globals.manager Switched on hosts: ['compute1', 'compute2']
2014-03-20 16:20:00,909 INFO     neat.globals.manager Starting the global manager listening to controller:60080

I think the local manager decides whether a host is under-utilized or over-utilized and then tells the global manager about that host, but here it is not telling the global manager anything.

local-manager.log from compute1 :

2014-03-20 16:20:39,887 INFO     neat.locals.manager Starting the local manager, iterations every 300 seconds
2014-03-20 16:20:39,929 DEBUG    neat.db Instantiated a Database object
2014-03-20 16:20:39,929 DEBUG    neat.db_utils Initialized a DB connection to mysql://neat:neatpassword@controller/neat
2014-03-20 16:20:39,930 INFO     neat.locals.manager Started an iteration
2014-03-20 16:20:39,931 INFO     neat.locals.manager The host is idle
2014-03-20 16:20:39,931 INFO     neat.locals.manager Skipped an iteration
2014-03-20 16:25:40,010 INFO     neat.locals.manager Started an iteration
2014-03-20 16:25:40,010 INFO     neat.locals.manager The host is idle
2014-03-20 16:25:40,010 INFO     neat.locals.manager Skipped an iteration
2014-03-20 16:30:40,036 INFO     neat.locals.manager Started an iteration
2014-03-20 16:30:40,036 INFO     neat.locals.manager The host is idle
2014-03-20 16:30:40,036 INFO     neat.locals.manager Skipped an iteration
2014-03-20 16:35:40,136 INFO     neat.locals.manager Started an iteration
2014-03-20 16:35:40,137 INFO     neat.locals.manager The host is idle
2014-03-20 16:35:40,137 INFO     neat.locals.manager Skipped an iteration
2014-03-20 16:40:40,236 INFO     neat.locals.manager Started an iteration
2014-03-20 16:40:40,236 INFO     neat.locals.manager The host is idle
2014-03-20 16:40:40,236 INFO     neat.locals.manager Skipped an iteration
2014-03-20 16:45:40,306 INFO     neat.locals.manager Started an iteration
2014-03-20 16:45:40,307 INFO     neat.locals.manager The host is idle
2014-03-20 16:45:40,307 INFO     neat.locals.manager Skipped an iteration
2014-03-20 16:50:40,407 INFO     neat.locals.manager Started an iteration
2014-03-20 16:50:40,408 INFO     neat.locals.manager The host is idle
2014-03-20 16:50:40,408 INFO     neat.locals.manager Skipped an iteration
2014-03-20 16:55:40,491 INFO     neat.locals.manager Started an iteration
2014-03-20 16:55:40,492 INFO     neat.locals.manager The host is idle
2014-03-20 16:55:40,492 INFO     neat.locals.manager Skipped an iteration

Does the global manager run pm-suspend only after a VM migration has been done, i.e., after the migration it checks whether the host is under-utilized and, if so, suspends it?

-Thanking You,
Kashyap

Kashyap Raiyani

unread,
Mar 21, 2014, 2:16:52 PM3/21/14
to opensta...@googlegroups.com
Hi Anton,

I am not sure, but I think I found what is preventing the host from getting suspended.

Code from the neat/locals/manager.py file:

if not vm_cpu_mhz:
    if log.isEnabledFor(logging.INFO):
        log.info('The host is idle')
    log.info('Skipped an iteration')
    return state

Every iteration, the local manager returns here, so underload detection never happens. This is under the assumption that the compute hosts are not running any VMs. Correct me if I am wrong.

Thanking You,
Kashyap 

Anton Beloglazov

unread,
Mar 27, 2014, 5:59:48 AM3/27/14
to opensta...@googlegroups.com
Hi Kashyap,

You are right, that block of code prevents the hosts from being switched to the sleep mode when they are initially idle. The current logic is that the local manager waits to receive at least one VM after the system start up before switching the host off. The reason is that currently the VM instantiation is handled by the core OpenStack services, which are not aware of OpenStack Neat. Therefore, without this block of code, if the cluster has no VMs running at the moment when OpenStack Neat services start, all the servers would be shut down immediately. Then, the core OpenStack services would not be able to instantiate a VM. 

I agree that this is a flaw in the current implementation: basically, it requires at least one host to be on at any time to allow new VMs to be instantiated. One way to address this would be to always keep one or more extra idle hosts on. This could be implemented by letting the global manager ignore underload requests from at least one host at all times. Unfortunately, I don't have a deployment of OpenStack Neat to test such code. However, it shouldn't be too hard to implement: all that needs to be done is a modification of the execute_underload function of the global manager so that it skips requests from one of the hosts. Then, the block of code that you found can be removed. If you could make that change and test it, I would be happy to merge it into the main repository.
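The suggested change could be sketched as follows. This is only an illustrative sketch of the idea, not actual OpenStack Neat code: the names RESERVED_HOSTS and should_skip_underload are hypothetical, and the real execute_underload function works with database and Nova state rather than plain lists.

```python
# Hypothetical sketch: keep a fixed reserve of hosts powered on by
# ignoring underload requests that would drop the number of active
# hosts below the reserve. Not part of the actual Neat code base.

RESERVED_HOSTS = 1  # number of idle hosts to keep on for new VM placement

def should_skip_underload(underloaded_host, active_hosts):
    """Return True if the global manager should ignore this underload
    request, so that at least RESERVED_HOSTS hosts stay switched on."""
    # hosts that would remain on if we suspended the requester
    remaining = [h for h in active_hosts if h != underloaded_host]
    return len(remaining) < RESERVED_HOSTS
```

With a guard like this at the top of execute_underload, the global manager would return early instead of suspending the last reserve host, and the idle-host check in the local manager could then be removed.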

Best regards,
Anton


Kashyap Raiyani

unread,
Mar 27, 2014, 9:06:07 AM3/27/14
to opensta...@googlegroups.com
Hi Anton,

I will surely do that, but currently I am busy with my thesis. I found that OpenStack Grizzly has a bug (#BUG) which prevents live migration, for both block migration and shared-storage migration. I tried both migration techniques, but neither worked. Then I tried with stable/havana, but the same thing happened. So my questions are:

1. Will OpenStack Neat be able to migrate VMs?
2. Does OpenStack Neat change the VM placement policy of the OpenStack installation (any version)?

Thanking You,
Kashyap


Kashyap Raiyani

unread,
Mar 28, 2014, 5:18:57 AM3/28/14
to opensta...@googlegroups.com
Hi Anton,

I was testing whether the VM migrates or not. In this setup, one of my compute hosts is idle and the other is running 1 VM. The following shows the logs of the compute host running the VM:

local-manager.log 

2014-03-28 13:32:54,087 INFO     neat.locals.manager Started an iteration
2014-03-28 13:32:54,090 DEBUG    neat.locals.manager The total physical CPU Mhz: 6400
2014-03-28 13:32:54,090 DEBUG    neat.locals.manager VM CPU MHz: {'1500cdb7-ed1d-4c3b-8cae-2a17e481d6a5': []}
2014-03-28 13:32:54,090 DEBUG    neat.locals.manager Host CPU MHz: [45, 50, 48, 51, 57, 52, 50, 55, 58]
2014-03-28 13:32:54,090 DEBUG    neat.locals.manager CPU utilization: []
2014-03-28 13:32:54,090 INFO     neat.locals.manager Not enough data yet - skipping to the next iteration
2014-03-28 13:32:54,090 INFO     neat.locals.manager Skipped an iteration

Terminal error of local-manager

root@compute2:/usr/local/bin# ./neat-local-manager 

libvir: QEMU error : Domain not found: no domain with matching uuid '1500cdb7-ed1d-4c3b-8cae-2a17e481d6a5'
libvir: QEMU error : Domain not found: no domain with matching uuid '1500cdb7-ed1d-4c3b-8cae-2a17e481d6a5'
libvir: QEMU error : Domain not found: no domain with matching uuid '1500cdb7-ed1d-4c3b-8cae-2a17e481d6a5'
libvir: QEMU error : Domain not found: no domain with matching uuid '1500cdb7-ed1d-4c3b-8cae-2a17e481d6a5'
libvir: QEMU error : Domain not found: no domain with matching uuid '1500cdb7-ed1d-4c3b-8cae-2a17e481d6a5'
libvir: QEMU error : Domain not found: no domain with matching uuid '1500cdb7-ed1d-4c3b-8cae-2a17e481d6a5'
libvir: QEMU error : Domain not found: no domain with matching uuid '1500cdb7-ed1d-4c3b-8cae-2a17e481d6a5'

data-collector.log

2014-03-28 14:38:08,584 INFO     neat.locals.collector Started an iteration
2014-03-28 14:38:08,587 DEBUG    neat.locals.collector Added VMs: ['01a9d4f9-f90b-4209-a995-4256ccac7dd1']
2014-03-28 14:38:08,589 DEBUG    neat.locals.collector Fetched remote data: {'01a9d4f9-f90b-4209-a995-4256ccac7dd1': []}
2014-03-28 14:38:08,590 INFO     neat.locals.collector Started VM data collection

After this, the data collector process terminates in the terminal with the following error:

data-collector-terminal-error

root@compute2:/usr/local/bin# ./neat-data-collector 
Traceback (most recent call last):
  File "./neat-data-collector", line 9, in <module>
    load_entry_point('openstack-neat==0.1', 'console_scripts', 'neat-data-collector')()
  File "<string>", line 2, in start
  File "/usr/local/lib/python2.7/dist-packages/PyContracts-1.6.0-py2.7.egg/contracts/main.py", line 296, in contracts_checker
    result = function_(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/openstack_neat-0.1-py2.7.egg/neat/locals/collector.py", line 140, in start
    int(interval))
  File "<string>", line 2, in start
  File "/usr/local/lib/python2.7/dist-packages/PyContracts-1.6.0-py2.7.egg/contracts/main.py", line 296, in contracts_checker
    result = function_(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/openstack_neat-0.1-py2.7.egg/neat/common.py", line 62, in start
    state = execute(config, state)
  File "/usr/local/lib/python2.7/dist-packages/openstack_neat-0.1-py2.7.egg/neat/locals/collector.py", line 271, in execute
    added_vm_data)
  File "<string>", line 2, in get_cpu_mhz
  File "/usr/local/lib/python2.7/dist-packages/PyContracts-1.6.0-py2.7.egg/contracts/main.py", line 296, in contracts_checker
    result = function_(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/openstack_neat-0.1-py2.7.egg/neat/locals/collector.py", line 623, in get_cpu_mhz
    previous_cpu_time[uuid] = get_cpu_time(vir_connection, uuid)
  File "<string>", line 2, in get_cpu_time
  File "/usr/local/lib/python2.7/dist-packages/PyContracts-1.6.0-py2.7.egg/contracts/main.py", line 296, in contracts_checker
    result = function_(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/openstack_neat-0.1-py2.7.egg/neat/locals/collector.py", line 643, in get_cpu_time
    return int(domain.getCPUStats(True, 0)[0]['cpu_time'])
AttributeError: virDomain instance has no attribute 'getCPUStats'

Why is this happening? Why is it not able to get the VM CPU usage?
The traceback always goes through PyContracts, but I have installed it properly; otherwise, the other services wouldn't run.

Thanking You,
Kashyap

Anton Beloglazov

unread,
Mar 30, 2014, 8:40:42 PM3/30/14
to opensta...@googlegroups.com
Hi Kashyap,

1. Neat relies on OpenStack Nova's API for VM migration. As long as it's available, migrations should work.
2. The default VM allocation policy is not altered, Neat only reallocates VMs once they are already placed by OpenStack.

Best regards,
Anton


Kashyap Raiyani

unread,
Mar 31, 2014, 2:43:43 AM3/31/14
to opensta...@googlegroups.com
Hi Anton,

Can you please tell me that why my openstack neat installation is not able to perform VM migration? I have shared my error log files in my previous post.

Thanking You,
Kashyap 

Anton Beloglazov

unread,
Apr 5, 2014, 12:56:58 AM4/5/14
to opensta...@googlegroups.com
Hi Kashyap,

Sorry again for taking so long to reply, I had a really busy week. That is a really strange error: basically, it means that libvirt managed to find the instance by UUID using lookupByUUIDString (otherwise it would have raised an exception), but the returned object of the virDomain class does not have the getCPUStats method. The problem might be with the libvirt library or its Python bindings. What version of libvirt are you using? Is it newer than 0.9.11? That is the version in which virDomainGetCPUStats was added to the Python API.
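One possible defensive workaround, sketched below under the assumption that the bindings lack getCPUStats: fall back to libvirt's older virDomain.info() call, whose fifth field is the domain's cumulative CPU time in nanoseconds. The get_cpu_time helper is illustrative, not the actual Neat implementation (Neat's version calls getCPUStats unconditionally).

```python
def get_cpu_time(domain):
    """Return the domain's cumulative CPU time in nanoseconds,
    working around Python bindings that lack getCPUStats."""
    if hasattr(domain, 'getCPUStats'):
        # getCPUStats is available since libvirt 0.9.11
        return int(domain.getCPUStats(True, 0)[0]['cpu_time'])
    # virDomain.info() returns [state, maxMem, memory, nrVirtCpu, cpuTime]
    return int(domain.info()[4])
```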

Best regards,
Anton


--

Kashyap Raiyani

unread,
Apr 5, 2014, 2:05:39 AM4/5/14
to opensta...@googlegroups.com
Hi Anton,

I am using libvirt version 1.0.2

root@compute1:~# libvirtd --version
libvirtd (libvirt) 1.0.2

How can I resolve this error? What packages should I add/remove to fix the Python binding problem?

Thanking You,
Kashyap 

Kashyap Raiyani

unread,
Apr 7, 2014, 4:07:24 AM4/7/14
to opensta...@googlegroups.com
Hi Anton,

I don't know how, but getCPUStats started working and the data collector was able to get the VM CPU usage. The local manager was able to detect underload, but when it tried to make a connection request to the global manager, it got a Connection refused error.

local-manager log:

2014-04-07 13:13:50,901 INFO     neat.locals.manager Starting the local manager, iterations every 300 seconds
2014-04-07 13:13:50,945 DEBUG    neat.db Instantiated a Database object
2014-04-07 13:13:50,945 DEBUG    neat.db_utils Initialized a DB connection to mysql://neat:neatpassword@controller/neat
2014-04-07 13:13:50,946 INFO     neat.locals.manager Started an iteration
2014-04-07 13:13:50,948 DEBUG    neat.locals.manager The total physical CPU Mhz: 6400
2014-04-07 13:13:50,948 DEBUG    neat.locals.manager VM CPU MHz: {'74bb28c5-7d4e-490f-b95e-36762aa995a0': [178, 190]}
2014-04-07 13:13:50,948 DEBUG    neat.locals.manager Host CPU MHz: [775, 405, 0]
2014-04-07 13:13:50,948 DEBUG    neat.locals.manager CPU utilization: [0.09109375, 0.0296875]
2014-04-07 13:13:51,047 INFO     neat.locals.manager Started underload detection
2014-04-07 13:13:51,047 INFO     neat.locals.manager Completed underload detection
2014-04-07 13:13:51,048 INFO     neat.locals.manager Started overload detection
2014-04-07 13:13:51,092 DEBUG    neat.locals.overload.mhod.core MHOD utilization:[0.09109375, 0.0296875]
2014-04-07 13:13:51,092 DEBUG    neat.locals.overload.mhod.core MHOD time_in_states:1
2014-04-07 13:13:51,092 DEBUG    neat.locals.overload.mhod.core MHOD time_in_state_n:0
2014-04-07 13:13:51,092 DEBUG    neat.locals.overload.mhod.core MHOD p:[[0.06666666666666667, 0.0], [0.0, 0.0]]
2014-04-07 13:13:51,092 DEBUG    neat.locals.overload.mhod.core MHOD current_state:0
2014-04-07 13:13:51,092 DEBUG    neat.locals.overload.mhod.core MHOD p[current_state]:[0.06666666666666667, 0.0]
2014-04-07 13:13:51,093 INFO     neat.locals.manager Completed overload detection
2014-04-07 13:13:51,093 INFO     neat.locals.manager Underload detected
2014-04-07 13:13:51,128 INFO     urllib3.connectionpool Starting new HTTP connection (1): controller
2014-04-07 13:13:51,128 ERROR    neat.locals.manager Exception at underload request:

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/openstack_neat-0.1-py2.7.egg/neat/locals/manager.py", line 305, in execute
    'reason': 0})
  File "/usr/lib/python2.7/dist-packages/requests/api.py", line 98, in put
    return request('put', url, data=data, **kwargs)
  File "/usr/lib/python2.7/dist-packages/requests/api.py", line 44, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 279, in request
    resp = self.send(prep, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies)
  File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 374, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python2.7/dist-packages/requests/adapters.py", line 206, in send
    raise ConnectionError(sockerr)
ConnectionError: [Errno 111] Connection refused
2014-04-07 13:13:51,156 INFO     neat.locals.manager Completed an iteration

second iteration of local-manager :

2014-04-07 13:19:02,126 INFO     neat.locals.manager Started an iteration
2014-04-07 13:19:02,128 DEBUG    neat.locals.manager The total physical CPU Mhz: 12400
2014-04-07 13:19:02,128 DEBUG    neat.locals.manager VM CPU MHz: {'74bb28c5-7d4e-490f-b95e-36762aa995a0': [178, 190, 189]}
2014-04-07 13:19:02,128 DEBUG    neat.locals.manager Host CPU MHz: [775, 405, 0, 0]
2014-04-07 13:19:02,128 DEBUG    neat.locals.manager CPU utilization: [0.04701612903225806, 0.01532258064516129, 0.015241935483870967]
2014-04-07 13:19:02,192 INFO     neat.locals.manager Started underload detection
2014-04-07 13:19:02,192 INFO     neat.locals.manager Completed underload detection
2014-04-07 13:19:02,192 INFO     neat.locals.manager Started overload detection
2014-04-07 13:19:02,277 DEBUG    neat.locals.overload.mhod.core MHOD utilization:[0.04701612903225806, 0.01532258064516129, 0.015241935483870967]
2014-04-07 13:19:02,277 DEBUG    neat.locals.overload.mhod.core MHOD time_in_states:1
2014-04-07 13:19:02,278 DEBUG    neat.locals.overload.mhod.core MHOD time_in_state_n:0
2014-04-07 13:19:02,278 DEBUG    neat.locals.overload.mhod.core MHOD p:[[0.1, 0.0], [0.0, 0.0]]
2014-04-07 13:19:02,278 DEBUG    neat.locals.overload.mhod.core MHOD current_state:0
2014-04-07 13:19:02,278 DEBUG    neat.locals.overload.mhod.core MHOD p[current_state]:[0.1, 0.0]
2014-04-07 13:19:02,278 INFO     neat.locals.manager Completed overload detection
2014-04-07 13:19:02,278 INFO     neat.locals.manager Underload detected
2014-04-07 13:19:02,287 INFO     requests.packages.urllib3.connectionpool Starting new HTTP connection (1): controller
2014-04-07 13:19:02,288 ERROR    neat.locals.manager Exception at underload request:

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/openstack_neat-0.1-py2.7.egg/neat/locals/manager.py", line 305, in execute
    'reason': 0})
  File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 99, in put
    return request('put', url, data=data, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 44, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 383, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 486, in send
    r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 378, in send
    raise ConnectionError(e)
ConnectionError: HTTPConnectionPool(host='controller', port=60080): Max retries exceeded with url: / (Caused by <class 'socket.error'>: [Errno 111] Connection refused)
2014-04-07 13:19:02,288 INFO     neat.locals.manager Completed an iteration

I also tried to reinstall the requests package, but got the same thing over and over. Why is this Errno 111 error occurring?

Thanking You,
Kashyap

Anton Beloglazov

unread,
Apr 7, 2014, 4:37:49 AM4/7/14
to opensta...@googlegroups.com

Hi Kashyap,

The connection might be blocked by the firewall. Try to run "sudo service iptables stop" on all the hosts.

Cheers,
Anton

Kashyap Raiyani

unread,
Apr 7, 2014, 5:24:58 AM4/7/14
to opensta...@googlegroups.com
Hi Anton,

I am still getting the same error. The firewall is disabled.

root@compute1:~#ufw status
Status:inactive

Why is the connection getting refused?

-Kashyap

Anton Beloglazov

unread,
Apr 7, 2014, 5:43:59 AM4/7/14
to opensta...@googlegroups.com
Is your global manager's host name controller? Try running `telnet controller 60080` from the compute host to check whether the global manager accepts connections. You can also try `telnet localhost 60080` on the controller.

Cheers,
Anton



Kashyap Raiyani

unread,
Apr 7, 2014, 7:45:37 AM4/7/14
to opensta...@googlegroups.com
Hi Anton,

Yes, the global manager's host name is controller. I am able to do 'telnet controller' on the compute host, but when I try 'telnet controller 60080', it gives a connection refused error. The global manager service is listening on port 60080 on the controller host. Further, when I try 'telnet localhost 60080' on the controller, it also gives a connection refused error.



root@controller:~# lsof  | grep 60080
neat-glob 4997            root    6u     IPv4              25833      0t0        TCP controller:60080 (LISTEN)

root@compute1:/usr/local/bin# telnet controller 60080
Trying 10.100.64.24...
telnet: Unable to connect to remote host: Connection refused

A simple `telnet <name/ip>` works.

Regards,
Kashyap

Anton Beloglazov

unread,
Apr 7, 2014, 7:55:20 AM4/7/14
to opensta...@googlegroups.com
By default, telnet uses port 992, which is accessible in your case, that's why it can connect. For some reason port 60080 is not accessible. Some discussion on potential causes is here: http://stackoverflow.com/questions/2333400/what-can-be-the-reasons-of-connection-refused-errors (found by googling "telnet connection refused"). You need to find out why the port cannot be accessed.
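A telnet client is one way to probe the port; the same check can be done with a plain TCP connect attempt. The helper below is an illustrative sketch (check_port is not part of Neat), distinguishing a successful connection from a refusal, timeout, or resolution failure:

```python
import socket

def check_port(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds,
    False on refusal, timeout, or name-resolution failure."""
    try:
        # create_connection resolves the name and performs the handshake
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run on a compute host, `check_port('controller', 60080)` should return True once the global manager is reachable over the network.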

Best regards,
Anton


Kashyap Raiyani

unread,
Apr 8, 2014, 2:53:21 AM4/8/14
to opensta...@googlegroups.com
Hi Anton,

telnet uses port 23 as its default port, and that is why I am able to do 'telnet localhost'. I changed that default port from 23 to 60080, so now I am able to do 'telnet localhost 60080'. But now I am not able to start the global manager service, because port 60080 is already in use (by the telnet server) and it throws a socket error saying the address is in use. If I change the telnet port back to 23, the global manager service runs fine.

So only one service can run on port 60080: either telnet or the global manager. Is the PUT request from the local manager using telnet? Why is port 60080 not listening to requests from the local manager even though the global manager is running on port 60080? Why is telnet needed?

Regards,
Kashyap

Anton Beloglazov

unread,
Apr 8, 2014, 3:04:38 AM4/8/14
to opensta...@googlegroups.com
Sorry, you are right, the default port is 23. However, you don't need to change the port of the telnet server to connect to another port with the client: the telnet client can connect to any port where a server is listening. Usually this feature is used just to test whether a server is actually listening on that port; Neat does not use telnet internally. When telnet is running on 60080, are you able to connect to it from the compute host with `telnet controller 60080`?

Best regards,
Anton


Kashyap Raiyani

unread,
Apr 8, 2014, 3:11:30 AM4/8/14
to opensta...@googlegroups.com
Hi,

Yes. When telnet is running on port 60080, I am able to connect from the compute host to the controller with `telnet controller 60080`. I am also able to do 'telnet localhost 60080' on the controller. But then the global manager service won't run.

-Regards,
Kashyap

Anton Beloglazov

unread,
Apr 8, 2014, 3:13:18 AM4/8/14
to opensta...@googlegroups.com
This means the global manager service does not accept connections for some reason. Can you show me its log after some connections are attempted?

Best regards,
Anton



Kashyap Raiyani

unread,
Apr 8, 2014, 3:29:23 AM4/8/14
to opensta...@googlegroups.com
Hi,

On controller side:

global manager log :

2014-04-08 12:00:31,905 DEBUG    neat.db Instantiated a Database object
2014-04-08 12:00:31,905 DEBUG    neat.db_utils Initialized a DB connection to mysql://neat:neatpassword@controller/neat
2014-04-08 12:00:31,912 DEBUG    neat.globals.manager Calling: ether-wake -i eth0 78:2b:cb:92:20:e0
2014-04-08 12:00:31,921 DEBUG    neat.globals.manager Calling: ether-wake -i eth0 78:2b:cb:92:20:ae
2014-04-08 12:00:31,923 INFO     neat.globals.manager Switched on hosts: ['compute1', 'compute2']
2014-04-08 12:00:31,959 INFO     neat.globals.manager Starting the global manager listening to controller:60080

on compute side node:

local manager log :


2014-04-07 13:13:51,093 INFO     neat.locals.manager Underload detected
2014-04-07 13:13:51,128 INFO     urllib3.connectionpool Starting new HTTP connection (1): controller
2014-04-07 13:13:51,128 ERROR    neat.locals.manager Exception at underload request:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/openstack_neat-0.1-py2.7.egg/neat/locals/manager.py", line 305, in execute
    'reason': 0})
  File "/usr/lib/python2.7/dist-packages/requests/api.py", line 98, in put
    return request('put', url, data=data, **kwargs)
  File "/usr/lib/python2.7/dist-packages/requests/api.py", line 44, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 279, in request
    resp = self.send(prep, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies)
  File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 374, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python2.7/dist-packages/requests/adapters.py", line 206, in send
    raise ConnectionError(sockerr)
ConnectionError: [Errno 111] Connection refused
2014-04-07 13:13:51,156 INFO     neat.locals.manager Completed an iteration


2014-04-07 13:19:02,278 INFO     neat.locals.manager Underload detected
2014-04-07 13:19:02,287 INFO     requests.packages.urllib3.connectionpool Starting new HTTP connection (1): controller
2014-04-07 13:19:02,288 ERROR    neat.locals.manager Exception at underload request:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/openstack_neat-0.1-py2.7.egg/neat/locals/manager.py", line 305, in execute
    'reason': 0})
  File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 99, in put
    return request('put', url, data=data, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 44, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 383, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 486, in send
    r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 378, in send
    raise ConnectionError(e)
ConnectionError: HTTPConnectionPool(host='controller', port=60080): Max retries exceeded with url: / (Caused by <class 'socket.error'>: [Errno 111] Connection refused)
2014-04-07 13:19:02,288 INFO     neat.locals.manager Completed an iteration


Should I change the global manager port to 8080 instead of 60080?
Further, when I open 'http://controller:60080/' in a browser on the controller, I get a "Method not allowed: the request has been made with a method other than the only supported PUT" message, which means the global manager is running, and in the terminal I get:
localhost - - [08/Apr/2014 12:55:38] "GET / HTTP/1.1" 405 92
localhost - - [08/Apr/2014 12:55:38] "GET /favicon.ico HTTP/1.1" 404 744

which also shows that the global manager is running.

-Regards 
kashyap

Anton Beloglazov

unread,
Apr 8, 2014, 4:02:53 AM4/8/14
to opensta...@googlegroups.com
The last error is expected, since the global manager only accepts PUT requests, but your browser makes a GET request. This, however, does confirm that the global manager received the request. BTW, can you access http://controller:60080/ from your compute host?
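The 405-for-GET behaviour can be reproduced in isolation with a minimal PUT-only HTTP server. The sketch below (Python 3) is an illustrative stand-in for the global manager's REST interface, not the actual Neat code:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class PutOnlyHandler(BaseHTTPRequestHandler):
    """Accepts PUT and rejects everything else, mimicking the
    global manager's REST interface described above."""

    def do_PUT(self):
        self.send_response(200)
        self.end_headers()

    def do_GET(self):
        # A browser's GET gets 405, just like in the thread above
        self.send_error(405, 'Method not allowed')

    def log_message(self, fmt, *args):
        pass  # silence per-request logging in this demo

def serve_in_background():
    """Start the server on an ephemeral localhost port and return it."""
    server = HTTPServer(('127.0.0.1', 0), PutOnlyHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Binding to `('127.0.0.1', 0)` here also illustrates the bind-address issue discussed later in the thread: a server bound to a loopback address answers local requests but is invisible to other hosts.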

The log of the global manager finishes at 12:00, but the error in the local manager's log is at 13:19. Do you have anything in the global manager's log at that time? 

Cheers,
Anton



Kashyap Raiyani

unread,
Apr 8, 2014, 4:15:08 AM4/8/14
to opensta...@googlegroups.com
Hi,

No, I am not able to open http://controller:60080/ from the compute host.

Those logs were from yesterday, but the error is the same. There are no further entries in the global manager log. Today's log files:

global manager.log :
2014-04-08 13:12:42,437 DEBUG    neat.db Instantiated a Database object
2014-04-08 13:12:42,437 DEBUG    neat.db_utils Initialized a DB connection to mysql://neat:neatpassword@controller/neat
2014-04-08 13:12:42,443 DEBUG    neat.globals.manager Calling: ether-wake -i eth0 78:2b:cb:92:20:e0
2014-04-08 13:12:42,451 DEBUG    neat.globals.manager Calling: ether-wake -i eth0 78:2b:cb:92:20:ae
2014-04-08 13:12:42,453 INFO     neat.globals.manager Switched on hosts: ['compute1', 'compute2']
2014-04-08 13:12:42,494 INFO     neat.globals.manager Starting the global manager listening to controller:60080

No further entries are made; the global-manager.log file stays the same.

local manager.log:
2014-04-08 13:29:14,984 INFO     neat.locals.manager Underload detected
2014-04-08 13:29:15,005 INFO     requests.packages.urllib3.connectionpool Starting new HTTP connection (1): controller
2014-04-08 13:29:15,006 ERROR    neat.locals.manager Exception at underload request:

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/openstack_neat-0.1-py2.7.egg/neat/locals/manager.py", line 305, in execute
    'reason': 0})
  File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 99, in put
    return request('put', url, data=data, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 44, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 383, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 486, in send
    r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 378, in send
    raise ConnectionError(e)
ConnectionError: HTTPConnectionPool(host='controller', port=60080): Max retries exceeded with url: / (Caused by <class 'socket.error'>: [Errno 111] Connection refused)
2014-04-08 13:29:15,035 INFO     neat.locals.manager Completed an iteration

And every 300 seconds (one cycle), the same thing is written to the log file.

-Regards,
Kashyap

Anton Beloglazov

unread,
Apr 8, 2014, 5:24:56 AM4/8/14
to opensta...@googlegroups.com
Since you can access http://controller:60080/ from the controller but not from compute, I still think it's some kind of network configuration issue and not a problem of the global manager service. The fact that the global manager's log doesn't have anything about the local manager's requests confirms that. I'm not really sure why this happens. Does the `controller` host name refer to the same IP address on both the controller and compute hosts?

Cheers,
Anton



Kashyap Raiyani

unread,
Apr 8, 2014, 5:42:08 AM4/8/14
to opensta...@googlegroups.com
Hi,

I am not able to find the problem either.
root@controller:~# cat /etc/hosts
127.0.0.1       localhost
127.0.1.1       controller
10.100.64.24    controller
10.100.64.28    compute1
10.100.64.29    compute2


root@compute1:~# cat /etc/hosts
127.0.0.1       localhost
127.0.1.1       compute1
10.100.64.24    controller
10.100.64.28    compute1
10.100.64.29    compute2


Name resolution is not the issue, because the global manager is able to run ether-wake and switch on both compute nodes.

-Regards
Kashyap

Kashyap Raiyani

unread,
Apr 8, 2014, 7:05:14 AM4/8/14
to opensta...@googlegroups.com
Hi Anton,

I found the problem. Indeed, it was a networking problem: the 127.0.1.1 address was causing the error. Now I will try the migration and will let you know about any further errors.

Thanks Anton

-Regards,
Kashyap 

Anton Beloglazov

unread,
Apr 8, 2014, 7:44:35 AM4/8/14
to opensta...@googlegroups.com

Awesome! How did you solve it?

Cheers,
Anton


Kashyap Raiyani

unread,
Apr 9, 2014, 4:06:26 AM4/9/14
to opensta...@googlegroups.com
Hi Anton,

The global manager was running on the loopback address, which is why it was not visible on the network, and which also explains why I could do telnet controller (127.0.1.1) 60080 locally. Then I changed that address to 10.100.64.24 and it worked fine (i.e., telnet controller (10.100.64.24) 60080).
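For readers hitting the same issue: Debian/Ubuntu installers map the machine's host name to 127.0.1.1 in /etc/hosts, so a service that binds by host name can end up on loopback. Based on the /etc/hosts contents shown earlier in this thread, the fix amounts to making the host name resolve only to the real NIC address, roughly:

```
# /etc/hosts on the controller -- drop the "127.0.1.1 controller" line
# so that "controller" resolves to the NIC address and the global
# manager binds to an externally reachable interface:
127.0.0.1       localhost
10.100.64.24    controller
10.100.64.28    compute1
10.100.64.29    compute2
```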

-Regards
Kashyap

Anton Beloglazov

unread,
Apr 9, 2014, 4:07:36 AM4/9/14
to opensta...@googlegroups.com
Great! I'm glad the problem is resolved.

Best regards,
Anton



Kashyap Raiyani

unread,
Apr 9, 2014, 4:32:22 AM4/9/14
to opensta...@googlegroups.com
Hi Anton,

I performed 2 experiments. In both experiments, compute2's local manager was not running.
In the 1st experiment, there is 1 VM on compute1 and compute2 is idle. On starting, the local manager requested a migration; the global manager didn't perform a migration and switched off (pm-suspend) compute2 (i.e., everything went as expected).
The logs are as follows:
local-manager(compute1) :
2014-04-09 13:00:56,802 INFO     neat.locals.manager Underload detected
2014-04-09 13:00:56,811 INFO     requests.packages.urllib3.connectionpool Starting new HTTP connection (1): controller
2014-04-09 13:04:09,147 DEBUG    requests.packages.urllib3.connectionpool "PUT / HTTP/1.1" 200 0
2014-04-09 13:04:09,148 INFO     neat.locals.manager Received response: [200]
2014-04-09 13:04:09,148 INFO     neat.locals.manager Completed an iteration

global-manager:
2014-04-09 13:00:24,526 DEBUG    neat.db Instantiated a Database object
2014-04-09 13:00:24,527 DEBUG    neat.db_utils Initialized a DB connection to mysql://neat:neatpassword@controller/neat
2014-04-09 13:00:24,533 DEBUG    neat.globals.manager Calling: ether-wake -i eth0 78:2b:cb:92:20:e0
2014-04-09 13:00:24,542 DEBUG    neat.globals.manager Calling: ether-wake -i eth0 78:2b:cb:92:20:ae
2014-04-09 13:00:24,545 INFO     neat.globals.manager Switched on hosts: ['compute1', 'compute2']
2014-04-09 13:00:24,610 INFO     neat.globals.manager Starting the global manager listening to controller:60080
2014-04-09 13:00:34,976 DEBUG    neat.globals.manager Request parameters validated
2014-04-09 13:00:34,976 INFO     neat.globals.manager Received a request from 10.100.64.28: {'username': 'd033e22ae348aeb5660fc2140aec35850c4da997', 'host': 'compute1', 'password': '1a1adce4ac87ab9a84ea61578519aa04bb191df1', 'reason': 0, 'time': 1397028656.8}
2014-04-09 13:00:34,977 INFO     neat.globals.manager Processing an underload of a host compute1
2014-04-09 13:00:34,977 INFO     neat.globals.manager Started processing an underload request
2014-04-09 13:00:34,992 INFO     urllib3.connectionpool Starting new HTTP connection (1): controller
2014-04-09 13:00:35,171 DEBUG    urllib3.connectionpool "POST /v2.0/tokens HTTP/1.1" 200 5887
2014-04-09 13:00:35,180 INFO     urllib3.connectionpool Starting new HTTP connection (1): 10.100.64.24
2014-04-09 13:00:35,266 DEBUG    urllib3.connectionpool "GET /v2/e7b93c32fd744c6788f34b0dc08f2045/servers/detail HTTP/1.1" 200 1430
2014-04-09 13:00:35,271 DEBUG    neat.globals.manager hosts_to_vms: {'compute1': ['2e5dcdb7-7486-42dc-8c43-83449e0c91a1'], 'compute2': []}
2014-04-09 13:00:35,275 INFO     urllib3.connectionpool Starting new HTTP connection (1): 10.100.64.24
2014-04-09 13:00:35,300 DEBUG    urllib3.connectionpool "GET /v2/e7b93c32fd744c6788f34b0dc08f2045/os-hosts/compute1 HTTP/1.1" 200 445
2014-04-09 13:00:35,301 DEBUG    neat.globals.manager Host CPU usage: {'compute1': 0, 'compute2': 10}
2014-04-09 13:00:35,302 DEBUG    neat.globals.manager Host total CPU usage: {'compute1': 100}
2014-04-09 13:00:35,302 DEBUG    neat.globals.manager Excluded the underloaded host compute1
2014-04-09 13:00:35,302 DEBUG    neat.globals.manager Host CPU usage: {'compute1': 0, 'compute2': 10}
2014-04-09 13:00:35,302 DEBUG    neat.globals.manager Host total CPU usage: {}
2014-04-09 13:00:35,302 INFO     urllib3.connectionpool Starting new HTTP connection (1): 10.100.64.24
2014-04-09 13:00:35,341 DEBUG    urllib3.connectionpool "GET /v2/e7b93c32fd744c6788f34b0dc08f2045/servers/detail HTTP/1.1" 200 1430
2014-04-09 13:00:35,345 INFO     urllib3.connectionpool Starting new HTTP connection (1): 10.100.64.24
2014-04-09 13:00:35,355 DEBUG    urllib3.connectionpool "GET /v2/e7b93c32fd744c6788f34b0dc08f2045/flavors/detail HTTP/1.1" 200 2089
2014-04-09 13:00:35,357 INFO     urllib3.connectionpool Starting new HTTP connection (1): 10.100.64.24
2014-04-09 13:00:35,394 DEBUG    urllib3.connectionpool "GET /v2/e7b93c32fd744c6788f34b0dc08f2045/servers/2e5dcdb7-7486-42dc-8c43-83449e0c91a1 HTTP/1.1" 200 1427
2014-04-09 13:00:35,413 INFO     neat.globals.manager Started underload VM placement
2014-04-09 13:00:35,413 INFO     neat.globals.manager Completed underload VM placement
2014-04-09 13:00:35,413 INFO     neat.globals.manager Underload: obtained a new placement {}
2014-04-09 13:00:35,416 INFO     neat.globals.manager Nothing to migrate
2014-04-09 13:00:35,416 DEBUG    neat.globals.manager Calling: ssh compute2 "pm-suspend"
2014-04-09 13:03:47,255 INFO     neat.globals.manager Switched off hosts: ['compute2']
2014-04-09 13:03:47,305 INFO     neat.globals.manager Completed processing an underload request

So I guess everything went fine, thanks to you.

In the 2nd experiment, compute1 and compute2 each hosted a single VM. On startup, the local manager requested a migration and the global manager started processing it, but it got stuck and the migration never happened. The logs are as follows:
local-manager.log (compute1):
2014-04-09 12:38:08,184 INFO     neat.locals.manager Underload detected
2014-04-09 12:38:08,193 INFO     requests.packages.urllib3.connectionpool Starting new HTTP connection (1): controller
2014-04-09 12:40:39,907 DEBUG    requests.packages.urllib3.connectionpool "PUT / HTTP/1.1" 500 59
2014-04-09 12:40:39,908 INFO     neat.locals.manager Received response: [500] A server error occurred.  Please contact the administrator.
2014-04-09 12:40:39,909 INFO     neat.locals.manager Completed an iteration

That 500 error was caused by my keyboard interrupt.

global-manager:
2014-04-09 12:37:33,592 INFO     neat.globals.manager Starting the global manager listening to controller:60080
2014-04-09 12:37:46,393 DEBUG    neat.globals.manager Request parameters validated
2014-04-09 12:37:46,393 INFO     neat.globals.manager Received a request from 10.100.64.28: {'username': 'd033e22ae348aeb5660fc2140aec35850c4da997', 'host': 'compute1', 'password': '1a1adce4ac87ab9a84ea61578519aa04bb191df1', 'reason': 0, 'time': 1397027288.18}
2014-04-09 12:37:46,393 INFO     neat.globals.manager Processing an underload of a host compute1
2014-04-09 12:37:46,393 INFO     neat.globals.manager Started processing an underload request
2014-04-09 12:37:46,409 INFO     urllib3.connectionpool Starting new HTTP connection (1): controller
2014-04-09 12:37:46,583 DEBUG    urllib3.connectionpool "POST /v2.0/tokens HTTP/1.1" 200 5887
2014-04-09 12:37:46,592 INFO     urllib3.connectionpool Starting new HTTP connection (1): 10.100.64.24
2014-04-09 12:37:46,679 DEBUG    urllib3.connectionpool "GET /v2/e7b93c32fd744c6788f34b0dc08f2045/servers/detail HTTP/1.1" 200 2847
2014-04-09 12:37:46,685 DEBUG    neat.globals.manager hosts_to_vms: {'compute1': ['2e5dcdb7-7486-42dc-8c43-83449e0c91a1'], 'compute2': ['5647c473-5ca3-441b-868d-2d61dc96c57e']}
2014-04-09 12:37:46,689 INFO     urllib3.connectionpool Starting new HTTP connection (1): 10.100.64.24
2014-04-09 12:37:46,714 DEBUG    urllib3.connectionpool "GET /v2/e7b93c32fd744c6788f34b0dc08f2045/os-hosts/compute1 HTTP/1.1" 200 445
2014-04-09 12:37:46,716 INFO     urllib3.connectionpool Starting new HTTP connection (1): 10.100.64.24
2014-04-09 12:37:46,742 DEBUG    urllib3.connectionpool "GET /v2/e7b93c32fd744c6788f34b0dc08f2045/os-hosts/compute2 HTTP/1.1" 200 445
2014-04-09 12:37:46,742 DEBUG    neat.globals.manager Host CPU usage: {'compute1': 0, 'compute2': 0}
2014-04-09 12:37:46,742 DEBUG    neat.globals.manager Host total CPU usage: {'compute1': 99, 'compute2': 99}
2014-04-09 12:37:46,743 DEBUG    neat.globals.manager Excluded the underloaded host compute1
2014-04-09 12:37:46,743 DEBUG    neat.globals.manager Host CPU usage: {'compute1': 0, 'compute2': 0}
2014-04-09 12:37:46,743 DEBUG    neat.globals.manager Host total CPU usage: {'compute2': 99}
2014-04-09 12:37:46,743 INFO     urllib3.connectionpool Starting new HTTP connection (1): 10.100.64.24
2014-04-09 12:37:46,783 DEBUG    urllib3.connectionpool "GET /v2/e7b93c32fd744c6788f34b0dc08f2045/servers/detail HTTP/1.1" 200 2847
2014-04-09 12:37:46,786 INFO     urllib3.connectionpool Starting new HTTP connection (1): 10.100.64.24
2014-04-09 12:37:46,796 DEBUG    urllib3.connectionpool "GET /v2/e7b93c32fd744c6788f34b0dc08f2045/flavors/detail HTTP/1.1" 200 2089
2014-04-09 12:37:46,798 INFO     urllib3.connectionpool Starting new HTTP connection (1): 10.100.64.24
2014-04-09 12:37:46,836 DEBUG    urllib3.connectionpool "GET /v2/e7b93c32fd744c6788f34b0dc08f2045/servers/2e5dcdb7-7486-42dc-8c43-83449e0c91a1 HTTP/1.1" 200 1427
2014-04-09 12:37:46,854 INFO     neat.globals.manager Started underload VM placement
2014-04-09 12:37:46,859 INFO     neat.globals.manager Completed underload VM placement
2014-04-09 12:37:46,859 INFO     neat.globals.manager Underload: obtained a new placement {'2e5dcdb7-7486-42dc-8c43-83449e0c91a1': 'compute2'}
2014-04-09 12:37:46,862 INFO     neat.globals.manager Started underload VM migrations
2014-04-09 12:37:46,868 INFO     urllib3.connectionpool Starting new HTTP connection (1): 10.100.64.24
2014-04-09 12:37:47,188 DEBUG    urllib3.connectionpool "POST /v2/e7b93c32fd744c6788f34b0dc08f2045/servers/2e5dcdb7-7486-42dc-8c43-83449e0c91a1/action HTTP/1.1" 202 0
2014-04-09 12:37:47,189 INFO     neat.globals.manager Started migration of VM 2e5dcdb7-7486-42dc-8c43-83449e0c91a1 to compute2
2014-04-09 12:37:57,200 INFO     urllib3.connectionpool Starting new HTTP connection (1): 10.100.64.24
2014-04-09 12:37:57,242 DEBUG    urllib3.connectionpool "GET /v2/e7b93c32fd744c6788f34b0dc08f2045/servers/2e5dcdb7-7486-42dc-8c43-83449e0c91a1 HTTP/1.1" 200 1427
2014-04-09 12:37:57,243 DEBUG    neat.globals.manager VM 2e5dcdb7-7486-42dc-8c43-83449e0c91a1: compute1, ACTIVE
2014-04-09 12:38:00,248 INFO     urllib3.connectionpool Starting new HTTP connection (1): 10.100.64.24
2014-04-09 12:38:00,288 DEBUG    urllib3.connectionpool "GET /v2/e7b93c32fd744c6788f34b0dc08f2045/servers/2e5dcdb7-7486-42dc-8c43-83449e0c91a1 HTTP/1.1" 200 1427
2014-04-09 12:38:00,289 DEBUG    neat.globals.manager VM 2e5dcdb7-7486-42dc-8c43-83449e0c91a1: compute1, ACTIVE
2014-04-09 12:38:03,293 INFO     urllib3.connectionpool Starting new HTTP connection (1): 10.100.64.24
2014-04-09 12:38:03,341 DEBUG    urllib3.connectionpool "GET /v2/e7b93c32fd744c6788f34b0dc08f2045/servers/2e5dcdb7-7486-42dc-8c43-83449e0c91a1 HTTP/1.1" 200 1427
2014-04-09 12:38:03,341 DEBUG    neat.globals.manager VM 2e5dcdb7-7486-42dc-8c43-83449e0c91a1: compute1, ACTIVE
[... the same three-line poll repeats every ~3 seconds from 12:38:06 through 12:40:14, each time reporting VM 2e5dcdb7-7486-42dc-8c43-83449e0c91a1: compute1, ACTIVE ...]
2014-04-09 12:40:17,539 INFO     urllib3.connectionpool Starting new HTTP connection (1): 10.100.64.24
2014-04-09 12:40:17,668 DEBUG    urllib3.connectionpool "GET /v2/e7b93c32fd744c6788f34b0dc08f2045/servers/2e5dcdb7-7486-42dc-8c43-83449e0c91a1 HTTP/1.1" 200 1427
2014-04-09 12:40:17,669 DEBUG    neat.globals.manager VM 2e5dcdb7-7486-42dc-8c43-83449e0c91a1: compute1, ACTIVE

 After that, I stopped the services.

So my question is: why didn't the VM migrate, and how long should I have to wait for the migration to happen?

Thanking You,
Kashyap

Anton Beloglazov

unread,
Apr 9, 2014, 8:40:31 PM
to opensta...@googlegroups.com
Hi Kashyap,

The migration hasn't even started: the state of the VM was ACTIVE the whole time. Normally it should transition to MIGRATING soon after the migration is initiated. There could be an OpenStack or KVM configuration issue preventing live migration. One way to test this is to manually migrate a VM first, as described here: http://docs.openstack.org/grizzly/openstack-compute/admin/content/live-migration-usage.html If that works, then just try again with Neat; random errors sometimes happen.

In this particular case, the issue wasn't on Neat's side: all it does is call Nova's Python API to initiate a migration and then monitor the state of the VM. If the VM never transitioned into the MIGRATING state, the migration failed on OpenStack's side for some reason. You could also inspect Nova's log files to find the actual reason for the failure.
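The monitoring step Anton describes can be sketched as a simple polling loop. The function name, defaults, and stubbed state source below are illustrative (not Neat's actual code); the 3-second cadence mirrors what the log above shows:

```python
import time

def wait_for_migration(get_state, timeout=60.0, interval=3.0):
    """Poll a VM-state callback until it leaves ACTIVE or time runs out.

    `get_state` stands in for the Nova API status query; the name,
    timeout, and interval here are illustrative defaults, not Neat's
    real configuration.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_state()
        if state != "ACTIVE":
            return state        # e.g. MIGRATING once the migration starts
        time.sleep(interval)
    return None                  # the migration never began

# Stub: the VM reports ACTIVE twice, then MIGRATING.
states = iter(["ACTIVE", "ACTIVE", "MIGRATING"])
print(wait_for_migration(lambda: next(states), timeout=5, interval=0))
# prints: MIGRATING
```

Against the log in the 2nd experiment, such a loop would keep seeing ACTIVE and eventually give up, which matches the stalled migration: the failure happened before the VM ever changed state.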

Best regards,
Anton

