release sprint, 0.5.0 hopefully


josef...@gmail.com

Jun 24, 2013, 11:44:01 AM
to pystatsmodels
I'm planning to spend my available time during this week on release
preparation, working down the list of pull requests and urgent issues.

Everyone is invited to help and contribute.

Check the documentation, the examples, pylint, bugs, the interfaces, ...
I don't know if anyone has ever looked at the plots on Python 3.x.

Pull requests are welcome; I will look at any of them and hopefully
merge them within a day.

Last weekend I merged quantile regression (Vincent) and the fast lowess (Carl).
Both are ready for a workout; quantile regression has a notebook in
the main examples folder.

Building from master requires Cython and a C compiler.

I will finish up version compatibility testing today or tomorrow.

Josef

Yaroslav Halchenko

Jun 24, 2013, 11:53:59 AM
to pystat...@googlegroups.com
yeay -- good luck!

let me know when I should kick it to do a test build across debians

Cheers,
--
Yaroslav O. Halchenko, Ph.D.
http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org
Senior Research Associate, Psychological and Brain Sciences Dept.
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419
WWW: http://www.linkedin.com/in/yarik

josef...@gmail.com

Jun 24, 2013, 12:02:21 PM
to pystat...@googlegroups.com
On Mon, Jun 24, 2013 at 11:53 AM, Yaroslav Halchenko
<yarik...@gmail.com> wrote:
> yeay -- good luck!
>
> let me know when I should kick it to do a test build across debians

Thanks, I will let you know, most likely next week.
We were always keeping TravisCI happy, but I still have some "noisy"
tests in some of my virtualenvs, and on pythonxy Ubuntu testing.

Josef

josef...@gmail.com

Jun 25, 2013, 10:35:36 AM
to pystat...@googlegroups.com
On Mon, Jun 24, 2013 at 12:02 PM, <josef...@gmail.com> wrote:
[...]
> Thanks, I will let you know, most likely next week.
> We were always keeping TravisCI happy, but I still have some "noisy"
> tests in some of my virtualenvs, and on pythonxy Ubuntu testing.

I just merged the last changes for version compatibility (as far as I
have them available).

The test suite runs without errors or failures
- on Python 2.6 with numpy 1.5.1, pandas 0.7.3, scipy 0.9.0,
- on Python 3.3 with the newest released versions of numpy, pandas, scipy,
- on Python 2.7 and Python 3.2, tested by TravisCI.

Skipper fixed the build with the Windows SDK.

Some tests are disabled with older versions of pandas, and a recent
pandas is required to use those parts:
- date handling in StataReader and StataWriter, and
- parts of the mosaic_plot graphics.

(there are many version combinations possible that are not tested.)
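The pandas gating boils down to a version comparison at import time. A hypothetical sketch of the pattern (the helper name and the 0.12.0 cutoff are my assumptions, not the actual statsmodels internals):

```python
# Hypothetical sketch of version-gated features/tests; the 0.12.0
# threshold is made up for illustration, not the real statsmodels cutoff.

def version_tuple(version_string):
    """Parse '0.7.3' (or '0.12.0rc1') into a comparable tuple (0, 7, 3)."""
    parts = []
    for piece in version_string.split(".")[:3]:
        digits = ""
        for ch in piece:
            if ch.isdigit():
                digits += ch
            else:
                break  # stop at 'rc', 'dev', etc.
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

try:
    import pandas
    HAS_RECENT_PANDAS = version_tuple(pandas.__version__) >= (0, 12, 0)
except ImportError:
    HAS_RECENT_PANDAS = False

print(version_tuple("0.7.3") < (0, 12, 0))  # True: old pandas, skip the test
```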

Josef

Yaroslav Halchenko

Jun 25, 2013, 10:51:24 AM
to pystat...@googlegroups.com


On Tue, 25 Jun 2013, josef...@gmail.com wrote:

> > Thanks, I will let you know, most likely next week.
[...]

is it a next week already? ;)

Skipper Seabold

Jun 25, 2013, 11:03:49 AM
to pystat...@googlegroups.com
On Tue, Jun 25, 2013 at 10:51 AM, Yaroslav Halchenko
<yarik...@gmail.com> wrote:
[...]
> is it a next week already? ;)

Let me know anything I need to look at. I can set aside some time to
work on this.

I'm going to be looking at Vincent's panel data stuff in the coming
weeks, as I need it for some spatial panel stuff right now that's not
available anywhere else AFAICT.

Skipper

josef...@gmail.com

Jun 25, 2013, 11:20:50 AM
to pystat...@googlegroups.com
On Tue, Jun 25, 2013 at 10:51 AM, Yaroslav Halchenko
<yarik...@gmail.com> wrote:
[...]
> is it a next week already? ;)

Not really; I'm looking at high-priority issues next.

I don't expect any additional version/platform problems from what else
needs to go in.

However, I don't know yet what to do with the facet plot PR. It has
some great new plots, and I would like to "throw" it at users to get
some feedback. (It's difficult to test whether plots look good.) But
it requires some commit-history cleaning, it has some hairy unicode
issues, and it will take at least a full day to get into a mergeable
state. And neither TravisCI nor most of my virtualenvs have matplotlib
for the version testing.

Except for the uncertainty about the facet plot, master is ready for
wider testing in the Debian machinery.

Your choice to start now or next week.

Josef

Vincent Arel

Jun 25, 2013, 11:23:16 AM
to pystat...@googlegroups.com
I will probably have a little time to invest in this if you need me to. The panel stuff is still very early and ugly, but I think that some of the ideas in the Grouping class could be neat if cleaned up.

Vincent

Ralf Gommers

Jun 25, 2013, 3:29:04 PM
to pystat...@googlegroups.com
I still get one error and nine failures, plus a bunch of noise in the test output. So not right now please...

This is on 32-bit Ubuntu, Python 2.7, Numpy 1.7.1, Scipy master:


$ python -c "import statsmodels as s; s.test('full')"
Running unit tests for statsmodels
/home/rgommers/.local/lib/python2.7/site-packages/nose/plugins/manager.py:418: UserWarning: Module statsmodels was already imported from /home/rgommers/Code/statsmodels/statsmodels/__init__.py, but /usr/lib/pymodules/python2.7 is being added to sys.path
  import pkg_resources
NumPy version 1.7.1
NumPy is installed in /usr/lib/python2.7/dist-packages/numpy
Python version 2.7.4 (default, Apr 19 2013, 18:32:33) [GCC 4.7.3]
nose version 1.3.0
/home/rgommers/Code/statsmodels/statsmodels/tsa/vector_ar/var_model.py:339: FutureWarning: The names argument is deprecated and will be removed in the next release.
  "removed in the next release.", FutureWarning)
.........................................................................................................................S.....S.......................................................................S..............................................................................................................E...........................................................FFFFFFF......................................................................................QC check did not pass for 4 out of 4 parameters
Try increasing solver accuracy or number of iterations, decreasing alpha, or switch solvers
F........................................................................................................................................................................................................................................................................Fontconfig warning: "/etc/fonts/conf.d/50-user.conf", line 9: reading configurations from ~/.fonts.conf is deprecated.
..............................can't invoke "event" command:  application has been destroyed
    while executing
"event generate $w <<ThemeChanged>>"
    (procedure "ttk::ThemeChanged" line 6)
    invoked from within
"ttk::ThemeChanged"
....can't invoke "event" command:  application has been destroyed                                        
    while executing                                                                                      
"event generate $w <<ThemeChanged>>"                                                                     
    (procedure "ttk::ThemeChanged" line 6)
    invoked from within
"ttk::ThemeChanged"
...can't invoke "event" command:  application has been destroyed
    while executing
"event generate $w <<ThemeChanged>>"
    (procedure "ttk::ThemeChanged" line 6)
    invoked from within
"ttk::ThemeChanged"
..can't invoke "event" command:  application has been destroyed
    while executing
"event generate $w <<ThemeChanged>>"
    (procedure "ttk::ThemeChanged" line 6)
    invoked from within
"ttk::ThemeChanged"
.............................................................S....F............................................................................................................................................................................................................................................................................................................................................................................................................................................................................S..........S........S...S..........S........S..S........S...S..........S........S..S.......S...S..................................................................................................................../home/rgommers/Code/statsmodels/statsmodels/stats/correlation_tools.py:82: UserWarning: maximum iteration reached
  warnings.warn('maximum iteration reached')
.......................................................................................................................................................................................................................................................................................................................................................................................................................................................S..............................................................................no overlap, power is zero, TODO
no overlap, power is zero, TODO
no overlap, power is zero, TODO
..no overlap, power is zero, TODO
no overlap, power is zero, TODO
no overlap, power is zero, TODO
no overlap, power is zero, TODO
no overlap, power is zero, TODO
......./home/rgommers/Code/scipy/scipy/stats/stats.py:1292: UserWarning: kurtosistest only valid for n>=20 ... continuing anyway, n=15
  int(n))
..........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................Optimization terminated successfully.
         Current function value: 5.327270
         Iterations: 316
         Function evaluations: 661
..........................................................
======================================================================
ERROR: test suite for <class 'statsmodels.discrete.tests.test_discrete.TestPoissonL1Compatability'>
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/rgommers/.local/lib/python2.7/site-packages/nose/suite.py", line 208, in run
    self.setUp()
  File "/home/rgommers/.local/lib/python2.7/site-packages/nose/suite.py", line 291, in setUp
    self.setupContext(ancestor)
  File "/home/rgommers/.local/lib/python2.7/site-packages/nose/suite.py", line 314, in setupContext
    try_run(context, names)
  File "/home/rgommers/.local/lib/python2.7/site-packages/nose/util.py", line 469, in try_run
    return func()
  File "/home/rgommers/Code/statsmodels/statsmodels/discrete/tests/test_discrete.py", line 561, in setupClass
    trim_mode='auto')
  File "/home/rgommers/Code/statsmodels/statsmodels/discrete/discrete_model.py", line 756, in fit_regularized
    size_trim_tol=size_trim_tol, qc_tol=qc_tol, **kwargs)
  File "/home/rgommers/Code/statsmodels/statsmodels/discrete/discrete_model.py", line 312, in fit_regularized
    cov_params_func=cov_params_func, **kwargs)
  File "/home/rgommers/Code/statsmodels/statsmodels/base/model.py", line 330, in fit
    Hinv = cov_params_func(self, xopt, retvals)
  File "/home/rgommers/Code/statsmodels/statsmodels/discrete/discrete_model.py", line 332, in cov_params_func_l1
    H_restricted_inv = np.linalg.inv(-H_restricted)
  File "/usr/lib/python2.7/dist-packages/numpy/linalg/linalg.py", line 445, in inv
    return wrap(solve(a, identity(a.shape[0], dtype=a.dtype)))
  File "/usr/lib/python2.7/dist-packages/numpy/linalg/linalg.py", line 328, in solve
    raise LinAlgError('Singular matrix')
LinAlgError: Singular matrix

======================================================================
FAIL: statsmodels.discrete.tests.test_discrete.TestProbitL1.test_aic
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/rgommers/.local/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/rgommers/Code/statsmodels/statsmodels/discrete/tests/test_discrete.py", line 393, in test_aic
    self.res1.aic, self.res2.aic, DECIMAL_3)
  File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 468, in assert_almost_equal
    raise AssertionError(msg)
AssertionError:
Arrays are not almost equal to 3 decimals
 ACTUAL: 44.361419555836498
 DESIRED: 38.39977387754293

======================================================================
FAIL: statsmodels.discrete.tests.test_discrete.TestProbitL1.test_bic
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/rgommers/.local/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/rgommers/Code/statsmodels/statsmodels/discrete/tests/test_discrete.py", line 397, in test_bic
    self.res1.bic, self.res2.bic, DECIMAL_3)
  File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 468, in assert_almost_equal
    raise AssertionError(msg)
AssertionError:
Arrays are not almost equal to 3 decimals
 ACTUAL: 44.361419555836498
 DESIRED: 42.796981585942106

======================================================================
FAIL: statsmodels.discrete.tests.test_discrete.TestProbitL1.test_bse
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/rgommers/.local/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/rgommers/Code/statsmodels/statsmodels/discrete/tests/test_discrete.py", line 385, in test_bse
    assert_almost_equal(self.res1.bse, self.res2.bse, DECIMAL_4)
  File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 452, in assert_almost_equal
    return assert_array_almost_equal(actual, desired, decimal, err_msg)
  File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 812, in assert_array_almost_equal
    header=('Arrays are not almost equal to %d decimals' % decimal))
  File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 608, in assert_array_compare
    chk_same_position(x_isnan, y_isnan, hasval='nan')
  File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 588, in chk_same_position
    raise AssertionError(msg)
AssertionError:
Arrays are not almost equal to 4 decimals

x and y nan location mismatch:
 x: array([ nan,  nan,  nan,  nan])
 y: array([ 2.05922641,  0.61889778,  0.07383875,         nan])

======================================================================
FAIL: statsmodels.discrete.tests.test_discrete.TestProbitL1.test_conf_int
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/rgommers/.local/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/rgommers/Code/statsmodels/statsmodels/discrete/tests/test_discrete.py", line 382, in test_conf_int
    self.res1.conf_int(), self.res2.conf_int, DECIMAL_4)
  File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 452, in assert_almost_equal
    return assert_array_almost_equal(actual, desired, decimal, err_msg)
  File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 812, in assert_array_almost_equal
    header=('Arrays are not almost equal to %d decimals' % decimal))
  File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 608, in assert_array_compare
    chk_same_position(x_isnan, y_isnan, hasval='nan')
  File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 588, in chk_same_position
    raise AssertionError(msg)
AssertionError:
Arrays are not almost equal to 4 decimals

x and y nan location mismatch:
 x: array([[ nan,  nan],
       [ nan,  nan],
       [ nan,  nan],
       [ nan,  nan]])
 y: array([[-9.44077951, -1.36876033],
       [ 0.03716721,  2.46320194],
       [-0.09727571,  0.19216687],
       [        nan,         nan]])

======================================================================
FAIL: statsmodels.discrete.tests.test_discrete.TestProbitL1.test_cov_params
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/rgommers/.local/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/rgommers/Code/statsmodels/statsmodels/discrete/tests/test_discrete.py", line 415, in test_cov_params
    self.res1.cov_params(), self.res2.cov_params, DECIMAL_4)
  File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 452, in assert_almost_equal
    return assert_array_almost_equal(actual, desired, decimal, err_msg)
  File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 812, in assert_array_almost_equal
    header=('Arrays are not almost equal to %d decimals' % decimal))
  File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 608, in assert_array_compare
    chk_same_position(x_isnan, y_isnan, hasval='nan')
  File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 588, in chk_same_position
    raise AssertionError(msg)
AssertionError:
Arrays are not almost equal to 4 decimals

x and y nan location mismatch:
 x: array([[ nan,  nan,  nan,  nan],
       [ nan,  nan,  nan,  nan],
       [ nan,  nan,  nan,  nan],
       [ nan,  nan,  nan,  nan]])
 y: array([[ 4.24041339, -0.83432592, -0.06827915,         nan],
       [-0.83432592,  0.38303447, -0.01700249,         nan],
       [-0.06827915, -0.01700249,  0.00545216,         nan],
       [        nan,         nan,         nan,         nan]])

======================================================================
FAIL: statsmodels.discrete.tests.test_discrete.TestProbitL1.test_nnz_params
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/rgommers/.local/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/rgommers/Code/statsmodels/statsmodels/discrete/tests/test_discrete.py", line 389, in test_nnz_params
    self.res1.nnz_params, self.res2.nnz_params, DECIMAL_4)
  File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 468, in assert_almost_equal
    raise AssertionError(msg)
AssertionError:
Arrays are not almost equal to 4 decimals
 ACTUAL: 0
 DESIRED: 3

======================================================================
FAIL: statsmodels.discrete.tests.test_discrete.TestProbitL1.test_params
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/rgommers/.local/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/rgommers/Code/statsmodels/statsmodels/discrete/tests/test_discrete.py", line 378, in test_params
    assert_almost_equal(self.res1.params, self.res2.params, DECIMAL_4)
  File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 452, in assert_almost_equal
    return assert_array_almost_equal(actual, desired, decimal, err_msg)
  File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 812, in assert_array_almost_equal
    header=('Arrays are not almost equal to %d decimals' % decimal))
  File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 645, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Arrays are not almost equal to 4 decimals

(mismatch 75.0%)
 x: array([ 0.,  0.,  0.,  0.])
 y: array([-5.40476992,  1.25018458,  0.04744558,  0.        ])

======================================================================
FAIL: statsmodels.discrete.tests.test_discrete.TestSweepAlphaL1.test_sweep_alpha
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/rgommers/.local/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/rgommers/Code/statsmodels/statsmodels/discrete/tests/test_discrete.py", line 493, in test_sweep_alpha
    assert_almost_equal(res2.params, self.res1.params[i], DECIMAL_4)
  File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 452, in assert_almost_equal
    return assert_array_almost_equal(actual, desired, decimal, err_msg)
  File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 812, in assert_array_almost_equal
    header=('Arrays are not almost equal to %d decimals' % decimal))
  File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 645, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Arrays are not almost equal to 4 decimals

(mismatch 100.0%)
 x: array([-4235.77774671,  1040.75041255,     5.34093705,   936.86867127])
 y: array([-10.37593611,   2.27080968,   0.06670638,   2.05723691])

======================================================================
FAIL: statsmodels.nonparametric.tests.test_lowess.TestLowess.test_delta
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/rgommers/.local/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/rgommers/Code/statsmodels/statsmodels/nonparametric/tests/test_lowess.py", line 92, in test_delta
    assert_almost_equal(expected_lowess_del1, actual_lowess_del1, decimal = testdec)
  File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 452, in assert_almost_equal
    return assert_array_almost_equal(actual, desired, decimal, err_msg)
  File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 812, in assert_array_almost_equal
    header=('Arrays are not almost equal to %d decimals' % decimal))
  File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 645, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Arrays are not almost equal to 7 decimals

(mismatch 0.375939849624%)
 x: array([[  2.40000000e+00,  -1.05527936e+00],
       [  2.60000000e+00,  -1.12048007e+00],
       [  3.20000000e+00,  -1.31608219e+00],...
 y: array([[  2.40000000e+00,  -1.05527935e+00],
       [  2.60000000e+00,  -1.12048006e+00],
       [  3.20000000e+00,  -1.31608219e+00],...

----------------------------------------------------------------------
Ran 2771 tests in 299.708s

FAILED (SKIP=19, errors=1, failures=9)

josef...@gmail.com

Jun 25, 2013, 3:50:15 PM
to pystat...@googlegroups.com
Did fmin_slsqp change in scipy master?

All except the last failure (lowess) are in the L1 optimization code,
which has been in statsmodels for a long time and hasn't had any
failures or errors like this.
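For comparing scipy versions directly, the L1 fitting goes through scipy.optimize.fmin_slsqp; a minimal standalone call (my toy problem, not the statsmodels test itself) that should behave identically across versions:

```python
import numpy as np
from scipy.optimize import fmin_slsqp

# Minimize (x - 2)^2 subject to x >= 1; the unconstrained optimum x = 2
# already satisfies the constraint, so the solver should return ~2.0.
xopt = fmin_slsqp(
    func=lambda x: (x[0] - 2.0) ** 2,
    x0=np.array([0.0]),
    f_ieqcons=lambda x: np.array([x[0] - 1.0]),  # constraint: x - 1 >= 0
    iprint=0,  # silence per-iteration output
)
print(xopt)
```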

The lowess test failure looks like a precision error; I don't have
much of an idea where it might come from.

Josef

josef...@gmail.com

Jun 25, 2013, 3:56:27 PM
to pystat...@googlegroups.com
>>
>> I still get one error and nine failures, plus a bunch of noise in the test
>> output. So not right now please...

I don't get most of the test noise when I run tox, but I had opened an
issue when I saw the noise in the pythonxy Ubuntu testing.

I haven't looked at the noise yet.

Thanks for testing

Josef

Skipper Seabold

Jun 25, 2013, 5:08:13 PM
to pystat...@googlegroups.com
On Tue, Jun 25, 2013 at 3:29 PM, Ralf Gommers <ralf.g...@gmail.com> wrote:
> I still get one error and nine failures, plus a bunch of noise in the test
> output. So not right now please...
>
> This is on 32-bit Ubuntu, Python 2.7, Numpy 1.7.1, Scipy master:
[...]
> QC check did not pass for 4 out of 4 parameters
> Try increasing solver accuracy or number of iterations, decreasing alpha, or
> switch solvers

I see this warning and these failures on scipy master but not on
scipy d701022 from mid-April. Do you see which tests these happen on?
What version of MPL?
There have been two commits that touched slsqp.py since then in
master.

https://github.com/scipy/scipy/commit/6941f5c5502057402b3a769ba048c1ded1389a7c#scipy/optimize/slsqp.py
https://github.com/scipy/scipy/commit/113e571563f686d405e2a9d3390b3415f0ed3f37#scipy/optimize/slsqp.py

lowess looks like the precision is too high for 32-bit.

Skipper

Ralf Gommers

Jun 25, 2013, 5:23:59 PM
to pystat...@googlegroups.com
> I don't get these errors or failures on scipy d701022 from mid-April.
> There have been two commits that touched slsqp.py since then in
> master.
>
> https://github.com/scipy/scipy/commit/6941f5c5502057402b3a769ba048c1ded1389a7c#scipy/optimize/slsqp.py
> https://github.com/scipy/scipy/commit/113e571563f686d405e2a9d3390b3415f0ed3f37#scipy/optimize/slsqp.py

Scipy 0.11.x also works for me, so it may indeed be a scipy issue.
I'll investigate.

> lowess looks like the precision is too high for 32-bit.

That doesn't seem to be it; I changed testdec from 7 to 4 and TestLowess.test_delta is still failing.

MPL version is 1.2.1.

Ralf

josef...@gmail.com

Jun 25, 2013, 7:36:07 PM
to pystat...@googlegroups.com
On Tue, Jun 25, 2013 at 3:29 PM, Ralf Gommers <ralf.g...@gmail.com> wrote:

My guess is a precision issue in an inequality comparison, but I don't
know why it shows up only on your machine so far.

Can you try changing delta to 1.0 + 1e-10 in the test?

Line 88 in test_lowess.py:

    actual_lowess_del1 = lowess(test_data['y'], test_data['x'],
                                frac=0.1, delta=1.0 + 1e-10)

The tests still pass for me if I make this change:

E:\tmp>nosetests --pdb-failures
e:\josef\eclipsegworkspace\statsmodels-git\statsmodels-all-new2\statsmodels\statsmodels\nonparametric\tests\test_lowess.py
C:\Python26\Scripts\nosetests-script.py:5: UserWarning: Module
pkg_resources was already imported from
C:\Python26\lib\site-packages\pkg_resources.pyc, but
c:\python26\lib\site-packages\distribute-0.6.24-py2.6.egg is being
added to sys.path
from pkg_resources import load_entry_point
C:\Python26\Scripts\nosetests-script.py:5: UserWarning: Module site
was already imported from C:\Python26\lib\site.pyc, but
c:\python26\lib\site-packages\distribute-0.6.24-py2.6.egg is being
added to sys.path
from pkg_resources import load_entry_point
> c:\python26\lib\site-packages\numpy\testing\utils.py(618)assert_array_compare()
-> raise AssertionError(msg)

The precision is much higher than decimal 7 on my computer:

(Pdb) import numpy as np
(Pdb) np.max(np.abs(y - x))
5.5422333389287814e-13
(Pdb) np.max(np.abs(x / y - 1))
1.3222756223285614e-12
(Pdb) u
> c:\python26\lib\site-packages\numpy\testing\utils.py(774)assert_array_almost_equal()
-> header='Arrays are not almost equal')
(Pdb) u
> c:\python26\lib\site-packages\numpy\testing\utils.py(447)assert_almost_equal()
-> return assert_array_almost_equal(actual, desired, decimal, err_msg)
(Pdb) u
> e:\josef\eclipsegworkspace\statsmodels-git\statsmodels-all-new2\statsmodels\statsmodels\nonparametric\tests\test_lowess.py(92)test_delta()
-> assert_almost_equal(expected_lowess_del1, actual_lowess_del1,
decimal = 15) #testdec)

(Pdb) np.min(np.abs(np.diff(expected_lowess_del1[:,0]) - 1))
0.0

delta increments for delta=1.0 fall exactly at the observations.

Josef
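The boundary behavior described above (with delta = 1.0, the skip thresholds fall exactly on the observations) can be sketched in isolation. This is a toy loop, not the statsmodels lowess code; `skipped_points` is a made-up helper for illustration only:

```python
# Toy sketch (assumption: NOT the statsmodels lowess implementation) of how a
# skip threshold that lands exactly on the data grid makes an inequality
# comparison fragile.  With delta = 1.0 on integer-spaced x values,
# `xi < last_x + delta` compares exactly equal floats; nudging delta by 1e-10
# moves the threshold off the grid and flips the comparisons.
x = [float(i) for i in range(10)]

def skipped_points(delta):
    """Return the x values that fall within delta of the last kept point."""
    last_x = x[0]
    skipped = []
    for xi in x[1:]:
        if xi < last_x + delta:   # the boundary comparison in question
            skipped.append(xi)    # close enough: would be interpolated
        else:
            last_x = xi           # far enough: would trigger a new fit
    return skipped

print(skipped_points(1.0))          # [] -- thresholds land exactly on points
print(skipped_points(1.0 + 1e-10))  # every other point is now "within delta"
```

This is why the `delta = 1.0 + 1e-10` change in the test sidesteps the ambiguity: the result no longer depends on which way an exact-tie comparison resolves on a given platform.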

Tom Augspurger

unread,
Jun 26, 2013, 9:08:51 AM
to pystat...@googlegroups.com


On Monday, June 24, 2013 10:44:01 AM UTC-5, josefpktd wrote:

I will finish up version compatibility testing today or tomorrow.

Josef

I ran the tests on a Mac (OS 10.8). Would you like me to post those results here or in a new thread? I had 41 errors and 11 failures. Most of the errors came from the ARIMA section, I think in the setup, which is why none of them passed.

josef...@gmail.com

unread,
Jun 26, 2013, 9:39:04 AM
to pystat...@googlegroups.com
Thank you for testing.

Please start a new thread, there might be more setup problems coming.

Josef

josef...@gmail.com

unread,
Jun 26, 2013, 9:48:58 AM
to pystatsmodels
Source archives and Windows binaries from the automatic nightly build
are here: http://statsmodels.sourceforge.net/binaries/

The source distribution requires a C compiler but not Cython.
The Windows binaries are compiled against numpy 1.7.1 (I think).

These are the versions from the build script that will be used to
upload to pypi and sourceforge.
So I hope these install without problems; if not, we need to fix them
until they do.

We would appreciate testing of these.

I'm still looking at some issues, but we are getting close.

Josef

Ralf Gommers

unread,
Jun 26, 2013, 5:16:41 PM
to pystat...@googlegroups.com
On Tue, Jun 25, 2013 at 11:23 PM, Ralf Gommers <ralf.g...@gmail.com> wrote:
On Tue, Jun 25, 2013 at 11:08 PM, Skipper Seabold <jsse...@gmail.com> wrote:
On Tue, Jun 25, 2013 at 3:29 PM, Ralf Gommers <ralf.g...@gmail.com> wrote:
> I still get one error and nine failures, plus a bunch of noise in the test
> output. So not right now please...
>
> This is on 32-bit Ubuntu, Python 2.7, Numpy 1.7.1, Scipy master:


I don't get these errors or failures on scipy d701022 from mid-April.
There have been two commits that touched slsqp.py since then in
master.

https://github.com/scipy/scipy/commit/6941f5c5502057402b3a769ba048c1ded1389a7c#scipy/optimize/slsqp.py
https://github.com/scipy/scipy/commit/113e571563f686d405e2a9d3390b3415f0ed3f37#scipy/optimize/slsqp.py

0.11.x also works for me, may indeed be a scipy issue. I'll investigate.

The issue is related to different Fortran compile args used by Bento vs. distutils. It smells like a subtle bug in the slsqp Fortran code. So not a statsmodels issue.

 

lowess looks like the precision is too high for 32-bit.

That doesn't seem to be it, I changed testdec from 7 to 4 and TestLowess.test_delta is still failing.

This one is fixed now, thanks. So all good here except for a little bit of test noise.

Cheers,
Ralf

josef...@gmail.com

unread,
Jun 26, 2013, 5:40:21 PM
to pystat...@googlegroups.com
On Wed, Jun 26, 2013 at 5:16 PM, Ralf Gommers <ralf.g...@gmail.com> wrote:
>
>
>
> On Tue, Jun 25, 2013 at 11:23 PM, Ralf Gommers <ralf.g...@gmail.com>
> wrote:
>>
>>
>>
>>
>> On Tue, Jun 25, 2013 at 11:08 PM, Skipper Seabold <jsse...@gmail.com>
>> wrote:
>>>
>>> On Tue, Jun 25, 2013 at 3:29 PM, Ralf Gommers <ralf.g...@gmail.com>
>>> wrote:
>>> > I still get one error and nine failures, plus a bunch of noise in the
>>> > test
>>> > output. So not right now please...
>>> >
>>> > This is on 32-bit Ubuntu, Python 2.7, Numpy 1.7.1, Scipy master:
>>>
>>>
>>> I don't get these errors or failures on scipy d701022 from mid-April.
>>> There have been two commits that touched slsqp.py since then in
>>> master.
>>>
>>>
>>> https://github.com/scipy/scipy/commit/6941f5c5502057402b3a769ba048c1ded1389a7c#scipy/optimize/slsqp.py
>>>
>>> https://github.com/scipy/scipy/commit/113e571563f686d405e2a9d3390b3415f0ed3f37#scipy/optimize/slsqp.py
>>
>>
>> 0.11.x also works for me, may indeed be a scipy issue. I'll investigate.
>
>
> The issue is related to different Fortran compile args used by Bento vs.
> distutils. It smells like a subtle bug in the slsqp Fortran code. So not a
> statsmodels issue.

Thanks for following up on this.

>
>>
>>>
>>>
>>> lowess looks like the precision is too high for 32-bit.
>>
>>
>> That doesn't seem to be it, I changed testdec from 7 to 4 and
>> TestLowess.test_delta is still failing.
>
>
> This one is fixed now, thanks. So all good here except for a little bit of
> test noise.

good, never trust floating point numbers, when in doubt add 1e-10 or so :)

Essentially only one set of failures with numpy master left.

Josef

>
> Cheers,
> Ralf
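As a footnote to the precision discussion: numpy's assert_almost_equal(decimal=d) passes roughly when abs(desired - actual) is below 10**(-d) (the exact constant differs slightly across numpy versions), so the 5.5e-13 difference from the pdb session above clears decimal=7 easily but not decimal=15. A minimal check, where `almost_equal` is a made-up wrapper for illustration:

```python
from numpy.testing import assert_almost_equal

def almost_equal(a, b, decimal):
    """Made-up wrapper: True if assert_almost_equal accepts the pair."""
    try:
        assert_almost_equal(a, b, decimal=decimal)
        return True
    except AssertionError:
        return False

diff = 5.5e-13  # the max abs difference reported in the pdb session

print(almost_equal(1.0, 1.0 + diff, decimal=7))   # tolerance ~1e-7: passes
print(almost_equal(1.0, 1.0 + diff, decimal=15))  # tolerance ~1e-15: fails
```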
>