Working towards a 1.3.0 release...


John Szakmeister

Jan 22, 2013, 10:19:28 AM
to nose-dev
Hi,

I just wanted to let everyone know where I'm at right now. I believe
that almost everything I thought was important is on master now. I
have two pull requests hanging out there at the moment:

* One that adds some resource fixes. [2]
* One that updates the changelog. [1]

There was one more that I thought would be nice to get in, #595:
https://github.com/nose-devs/nose/pull/595. I've looked at it, and it
looks good to me... but it'd be nice if someone else could review it
too. #603 (https://github.com/nose-devs/nose/pull/603) is also
interesting. It makes integration with testtools nicer, though I
question who is at fault here. ISTM that testtools should be
providing a valid exc_info... but it'd be nice to hear what others
think.
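
For reference, the "valid exc_info" here is the (type, value, traceback)
triple that sys.exc_info() returns. A rough sketch of how a unittest-style
result object consumes it (the names below are illustrative, not nose or
testtools internals):

import sys

def run_one(test, result):
    # Inside an except block, sys.exc_info() yields the
    # (type, value, traceback) triple that a result's addError() expects.
    try:
        test.runTest()
    except Exception:
        result.addError(test, sys.exc_info())
    else:
        result.addSuccess(test)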

Jason: I hope you don't mind that I've been actively trying to tackle
things. Hopefully, you feel it's all within the same spirit and
culture you've established. If you don't, feel free to point that
out. Given that, I think things are shaping up pretty nicely. What
are the next steps?

-John


[1]: <https://github.com/nose-devs/nose/pull/607>
[2]: <https://github.com/nose-devs/nose/pull/608>

jason pellerin

Jan 22, 2013, 2:20:36 PM
to nose...@googlegroups.com
I don't mind at all! It's a great help to have you taking the lead on getting the release ready -- I've had even less time than usual lately. But I'll try to review the pull requests you mention this week so I don't hold up progress too much.

Thank you!

JP




John Szakmeister

Jan 23, 2013, 6:27:33 AM
to nose...@googlegroups.com
On Tue, Jan 22, 2013 at 2:20 PM, jason pellerin <jpel...@gmail.com> wrote:
> I don't mind at all! It's a great help to have you taking the lead on
> getting the release ready -- I've had even less time than usual lately. But
> I'll try to review the pull requests you mention this week so I don't hold
> up progress too much.

Sounds great!

> Thank you!

You're welcome!

-John

John Szakmeister

Feb 9, 2013, 6:33:44 AM
to nose-dev
On Tue, Jan 22, 2013 at 10:19 AM, John Szakmeister <jo...@szakmeister.net> wrote:
> Hi,
>
> I just wanted to let everyone know where I'm at right now. I believe
> that most everything that I thought was important is on master now. I
> have 2 pull requests hanging out there at the moment:
>
> * One that adds some resource fixes. [2]
> * One that updates the changelog. [1]
>
> There was one more that I thought would be nice to get in, #595:
> https://github.com/nose-devs/nose/pull/595. I've looked at it, and it
> looks good to me... but it'd be nice if someone else could review it
> too. #603 (https://github.com/nose-devs/nose/pull/603) is also
> interesting. It makes integration with testtools nicer, though I
> question who is at fault here. ISTM, that test tools should be
> providing a valid exc_info... but it's be nice to hear what others
> think.

I took some time yesterday to get some remaining bits merged. I'm
pretty happy with things as they stand.

There's one pull request, #553, that I had been considering but am
on the fence about. The PR fixes the error output when a KeyError is
raised; unfortunately, I'm not sure I like the technique. It'd be
nice to get other eyes on it, or to just punt for this release. The
only reason I'm reluctant to do the latter is that it does fix a
long-standing issue.

BTW, one of my changes introduced some breakage for
TestConcurrentShared. I have no clue how I missed it since I'm pretty
certain I ran it under PyPy, 3.3, and 2.7... but I did. I went ahead
and committed a fix to master yesterday. Sorry about that, and I'll
try to be more diligent in the future.

-John

John Szakmeister

Feb 9, 2013, 6:34:36 AM
to nose-dev
On Sat, Feb 9, 2013 at 6:33 AM, John Szakmeister <jo...@szakmeister.net> wrote:
[snip]
> There's one pull request, #553, that I had been considering, but I'm
> on the fence about. The PR fixes the error output when a KeyError is
> raised. Unfortunately, I'm not sure I like the technique though.
> It'd be nice to get other eyes on it, or just punt for this release.
> The only reason I'm reluctant to do the latter is that it does fix a
> long standing issue.

I meant to include the url to the pull request:
<https://github.com/nose-devs/nose/pull/553>

-John

John Szakmeister

Feb 11, 2013, 5:35:19 AM
to nose-dev
On Sat, Feb 9, 2013 at 6:33 AM, John Szakmeister <jo...@szakmeister.net> wrote:
[snip]
> There's one pull request, #553, that I had been considering, but I'm
> on the fence about. The PR fixes the error output when a KeyError is
> raised. Unfortunately, I'm not sure I like the technique though.
> It'd be nice to get other eyes on it, or just punt for this release.
> The only reason I'm reluctant to do the latter is that it does fix a
> long standing issue.

This PR has been merged, and the CHANGELOG updated.

What are the next steps, Jason?

-John

jason pellerin

Feb 11, 2013, 9:09:01 AM
to nose...@googlegroups.com
I'll look things over locally, and then (assuming everything looks ok) I'll put the release up on pypi. Today if time permits, but it probably won't, so more likely sometime in the next few days.

Thanks again for organizing this release and for doing all of the work.

JP



jason pellerin

Feb 11, 2013, 10:34:35 AM
to nose...@googlegroups.com
Something is not quite right. Under py33 I'm finding that the keyboard interrupt test is locking up consistently. I thought you'd already fixed that? Did something not get merged, maybe?

JP

John Szakmeister

Feb 11, 2013, 7:03:22 PM
to nose...@googlegroups.com
On Mon, Feb 11, 2013 at 10:34 AM, jason pellerin <jpel...@gmail.com> wrote:
> Something is not quite right. Under py33 I'm finding that the keyboard
> interrupt test is locking up consistently. I thought you'd already fixed
> that? Did something not get merged, maybe?

I haven't seen anything of that sort for a while now. I don't
remember fixing a KeyboardInterrupt problem, though I did bust the
TestConcurrentShared test for a short while (I added a check to skip
it under PyPy, but then didn't call the base class's setUp()
routine). The fix for that is on master though.

The only other big Python 3.3 fix was an infinite recursion issue.
That is on master as well. The rest were cleanups for the doctests
since the output is slightly different between 2.x and 3.x.

Which test is failing?

-John

jason pellerin

Feb 12, 2013, 5:12:53 PM
to nose...@googlegroups.com
tox -e py33 for me sometimes results in the test process getting to tests/functional_tests/test_multiprocessing/test_keyboardinterrupt.py and then sticking at the point where it's waiting for the remote process to communicate (line 56). I haven't been able to reproduce that today, though.

JP



John Szakmeister

Feb 13, 2013, 6:34:26 AM
to nose...@googlegroups.com


On Feb 12, 2013 5:12 PM, "jason pellerin" <jpel...@gmail.com> wrote:
>
> tox -e py33 for me sometimes results in the test process getting to tests/functional_tests/test_multiprocessing/test_keyboardinterrupt.py and then sticking at the point where it's waiting for the remote process to communicate (line 56). I haven't been able to reproduce that today, though.

I ran the tests 25 times with no issue. :-(

-John

jason pellerin

Feb 13, 2013, 9:04:32 AM
to nose...@googlegroups.com
I wonder if it was some kind of 2to3-related problem on my end, since I haven't been able to reproduce it after bumping nose's version number and the tox environment and running a full tox run. Puzzling.


John Szakmeister

Feb 14, 2013, 5:16:12 AM
to nose...@googlegroups.com
On Wed, Feb 13, 2013 at 9:04 AM, jason pellerin <jpel...@gmail.com> wrote:
> I wonder if it was some kind of 2to3-related problem on my end, since I
> haven't been able to reproduce it since bumping nose's version # and the tox
> environment and running a full tox run. Puzzling.

I was going to ask if you think it was some leftover cruft somewhere.
It certainly sounds like it. Those kinds of issues are always a bit
unnerving because you can never be entirely sure. :-(

-John

jason pellerin

Feb 14, 2013, 9:01:08 AM
to nose...@googlegroups.com
I think at this point, since we can't be sure and one sometimes-failing test is no worse than the several others we have, it's time to cut the release. So I'll try to work on the NEWS file and get everything squared away for that today.

Thanks again!

JP



John Szakmeister

Feb 14, 2013, 9:21:24 AM
to nose...@googlegroups.com
On Thu, Feb 14, 2013 at 9:01 AM, jason pellerin <jpel...@gmail.com> wrote:
> I think at this point since we can't be sure and one sometimes-failing test
> is no worse than the several others we have, it's time to cut the release.
> So I'll try to work on the NEWS file and get everything squared away for
> that today.

Sounds good, Jason!

> Thanks again!

You're very welcome!

-John

jason pellerin

Feb 19, 2013, 9:16:04 AM
to nose...@googlegroups.com
Status as of today:

- NEWS file is written
- 3.3 test failure came back, once
- Fixed an issue that prevented tests from running under 2.4 at all, but there are still numerous failures and errors under 2.4 now. Do we care enough about that to delay the release?

JP



John Szakmeister

Feb 19, 2013, 10:32:41 AM
to nose...@googlegroups.com
On Tue, Feb 19, 2013 at 9:16 AM, jason pellerin <jpel...@gmail.com> wrote:
> Status as of today:
>
> - NEWS file is written
> - 3.3 test failure came back, once
> - Fixed an issue that prevented tests from running under 2.4 at all, but
> there are still numerous failures and errors under 2.4 now. Do we care
> enough about that to delay the release?

To be honest, I think it's a good time to shed Python 2.4 support. I
know RH 5.x uses it, but it's on the tail end of support. So even if
you're concerned about the loss of 2.4 support, I'd say drop it and
move on.

FWIW, I don't think Travis supports 2.4 either, which is probably why
the issues haven't been noticed until now.

-John

jason pellerin

Feb 23, 2013, 8:53:58 AM
to nose...@googlegroups.com
Some of the 2.4 problems are just in the tests, but a few would make it impossible to actually run under 2.4 -- I fixed those, and fixed a problem in how py33 was set up in tox.ini. And now, of course, the keyboard interrupt test is locking up every time. :/

So, looks like no release this week. Sorry.

JP



John Szakmeister

Feb 23, 2013, 9:37:26 AM
to nose...@googlegroups.com
On Sat, Feb 23, 2013 at 8:53 AM, jason pellerin <jpel...@gmail.com> wrote:
> Some of the 2.4 problems are just in the tests, but a few would make it
> impossible to actually run under 2.4 -- I fixed those, and fixed a problem
> in how py33 was set up in tox.ini. And now, of course, the keyboard
> interrupt test is locking up every time. :/

I'm still not seeing it. :-( What are you running on? Did you build
Python 3.3 yourself? I'm just curious about some of the details of
your setup and whether or not I can re-create the issue some other
way.

> So, looks like no release this week. Sorry.

No worries. I may look at getting a couple of other pull requests
merged... in particular the coverage one that fixes a broken regexp.

-John

jason pellerin

Feb 25, 2013, 9:16:12 AM
to nose...@googlegroups.com
I'm using python3.3 from the deadsnakes PPA on Ubuntu 12.04.

It seems like some kind of output buffering problem to me -- I can also trigger it by read()ing from process.stderr if I make the read() amount big enough. "Big enough" varies by Python version: what blocks on 3.3 (say, read(3000)) passes on 3.2.
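
A rough stand-alone illustration of that failure mode (not the actual test code; the child command here is made up):

import subprocess, sys

# The child writes a little to stderr and then keeps running.  A blocking
# read(n) on a pipe only returns once n bytes have arrived or the child
# closes its end, so the parent hangs on the oversized read below until
# the child finally exits.
child = subprocess.Popen(
    [sys.executable, '-c',
     "import sys, time; sys.stderr.write('x' * 100); sys.stderr.flush(); time.sleep(60)"],
    stderr=subprocess.PIPE)

data = child.stderr.read(3000)   # only 100 bytes available and no EOF yet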

JP



John Szakmeister

Mar 3, 2013, 4:16:58 PM
to nose...@googlegroups.com
On Mon, Feb 25, 2013 at 9:16 AM, jason pellerin <jpel...@gmail.com> wrote:
> I'm using python3.3. from the deadsnakes PPA on ubuntu 12.04.
>
> It seems like some kind of output buffering problem to me -- I can also
> trigger it by read()ing from process.stderr -- if I make the read() amount
> big enough. "Big enough" varies by python version, what blocks on 3.3 (say,
> read(3000)) passes on 3.2.

So I was able to reproduce this in my Ubuntu 12.04 VM. It's
definitely the kind of issue you mention. I used the patch below, and
have managed to run just the mp keyboard interrupt tests 61 times in a
row now without an issue. This patch is not ready to incorporate yet.
Some of it could be structured better, and I want to make sure things
are getting cleaned up.

This gist has the patch too:
https://gist.github.com/jszakmeister/5078271. Do you mind giving it a
try and seeing if it fixes your issue? I may get some time tomorrow to
clean it up more.

One thing to note: without the extra sleep(0.1) before issuing the
SIGINT, I found that the test was often killed prematurely. I think
this is just another example of how racy these tests are. :-( What do
you think about having the test touch a file and check for the
existence of that as a busy-wait? I don't think waiting for the
existence of the logfile is good enough.

-John

diff --git a/functional_tests/test_multiprocessing/test_keyboardinterrupt.py b/functional_tests/test_multiprocessing/test_keyboardinterrupt.py
index 8f07e54..86452a9 100644
--- a/functional_tests/test_multiprocessing/test_keyboardinterrupt.py
+++ b/functional_tests/test_multiprocessing/test_keyboardinterrupt.py
@@ -25,9 +25,12 @@ runner = os.path.join(support, 'fake_nosetest.py')
def keyboardinterrupt(case):
#os.setsid would create a process group so signals sent to the
#parent process will propogates to all children processes
- from tempfile import mktemp
+ from tempfile import mktemp, TemporaryFile
logfile = mktemp()
- process = Popen([sys.executable,runner,os.path.join(support,case),logfile], preexec_fn=os.setsid, stdout=PIPE, stderr=PIPE, bufsize=-1)
+ tmpStdout = TemporaryFile()
+ tmpStderr = TemporaryFile()
+ process = Popen([sys.executable,runner,os.path.join(support,case),logfile],
+ preexec_fn=os.setsid, stdout=tmpStdout, stderr=tmpStderr)

#wait until logfile is created:
retry=100
@@ -37,8 +40,20 @@ def keyboardinterrupt(case):
if not retry:
raise Exception('Timeout while waiting for log file to be created by fake_nosetest.py')

+ sleep(0.1)
os.killpg(process.pid, signal.SIGINT)
- return process, logfile
+
+ return (process, tmpStdout, tmpStderr), logfile
+
+def get_stdout_stderr(process):
+ process, tmpStdout, tmpStderr = process
+
+ retcode = process.wait()
+ tmpStdout.seek(0)
+ tmpStderr.seek(0)
+ stdout = tmpStdout.read().decode('utf-8')
+ stderr = tmpStderr.read().decode('utf-8')
+ return stdout, stderr

def get_log_content(logfile):
'''prefix = 'tempfile is: '
@@ -53,9 +68,14 @@ def get_log_content(logfile):

def test_keyboardinterrupt():
process, logfile = keyboardinterrupt('keyboardinterrupt.py')
- stdout, stderr = [s.decode('utf-8') for s in process.communicate(None)]
- print stderr
+ stdout, stderr = get_stdout_stderr(process)
log = get_log_content(logfile)
+ print "---- log ----"
+ print log
+ print "---- captured stdout ----"
+ print stdout
+ print "---- captured stderr ----"
+ print stderr
assert 'setup' in log
assert 'test_timeout' in log
assert 'test_timeout_finished' not in log
@@ -70,9 +90,15 @@ def test_keyboardinterrupt():
def test_keyboardinterrupt_twice():
process, logfile = keyboardinterrupt('keyboardinterrupt_twice.py')
sleep(0.5)
- os.killpg(process.pid, signal.SIGINT)
- stdout, stderr = [s.decode('utf-8') for s in process.communicate(None)]
+ os.killpg(process[0].pid, signal.SIGINT)
+ stdout, stderr = get_stdout_stderr(process)
log = get_log_content(logfile)
+ print "---- log ----"
+ print log
+ print "---- captured stdout ----"
+ print stdout
+ print "---- captured stderr ----"
+ print stderr
assert 'setup' in log
assert 'test_timeout' in log
assert 'test_timeout_finished' not in log

John Szakmeister

Mar 4, 2013, 5:57:01 AM
to nose...@googlegroups.com
On Sun, Mar 3, 2013 at 4:16 PM, John Szakmeister <jo...@szakmeister.net> wrote:
> On Mon, Feb 25, 2013 at 9:16 AM, jason pellerin <jpel...@gmail.com> wrote:
>> I'm using python3.3. from the deadsnakes PPA on ubuntu 12.04.
>>
>> It seems like some kind of output buffering problem to me -- I can also
>> trigger it by read()ing from process.stderr -- if I make the read() amount
>> big enough. "Big enough" varies by python version, what blocks on 3.3 (say,
>> read(3000)) passes on 3.2.
>
> So I was able to reproduce this in my Ubuntu 12.04 VM. It's
> definitely the kind of issue you mention. I used the patch below, and
> have managed to run just the mp keyboard interrupt tests 61 times in a
> row now without an issue. This patch is not ready to incorporate yet.
> Some of it could be structured better, and I want to make sure things
> are getting cleaned up.

I also found that changing ``bufsize=-1`` to ``bufsize=65536`` worked
well, though I feel like we're likely to encounter the issue again
with that kind of solution. I think using the temporary files is
probably a better long-term solution.
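
Stripped down, the temporary-file approach is essentially this (a sketch of the idea, not the exact patch; the child command is just a placeholder):

import sys
from subprocess import Popen
from tempfile import TemporaryFile

out, err = TemporaryFile(), TemporaryFile()
# Redirecting to real files means no pipe can fill up and stall either side.
process = Popen([sys.executable, '-c', 'print("hello")'],
                stdout=out, stderr=err)
process.wait()
out.seek(0)
err.seek(0)
stdout, stderr = out.read(), err.read()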

I did check to make sure the patch I sent yesterday was cleaning up
things correctly, and that does appear to be the case. I'd still like
to rework one portion of it--I'm returning a tuple which gets assigned
to a "process" variable, which doesn't make much sense. I'd also like
to have the test touch a file to let us know that it's ready to be
killed.

FWIW, I do think there's a small bug here too. I've found a few times
where the killpg interrupted the test on the sleep call (like it's
supposed to), but nose didn't call the teardown routine, or if it did,
it didn't print the final "Ran X of Y tests".

I have quite a lot to do this week and weekend, so it's unlikely I'm
going to get to this again before next week.

-John

John Szakmeister

Mar 6, 2013, 7:18:27 AM
to nose...@googlegroups.com
On Mon, Feb 25, 2013 at 9:16 AM, jason pellerin <jpel...@gmail.com> wrote:
> I'm using python3.3. from the deadsnakes PPA on ubuntu 12.04.
>
> It seems like some kind of output buffering problem to me -- I can also
> trigger it by read()ing from process.stderr -- if I make the read() amount
> big enough. "Big enough" varies by python version, what blocks on 3.3 (say,
> read(3000)) passes on 3.2.

I've tracked it down, and it's a Python bug. I've filed an issue for it:
<http://bugs.python.org/issue17367>

Given that, what would you like to do here? Disable that test under
Python 3.3 for now? Looking at Python's history in Mercurial, it
appears that this bug has been there for a while. :-(

-John

John Szakmeister

Mar 6, 2013, 7:46:52 AM
to nose...@googlegroups.com
On Wed, Mar 6, 2013 at 7:18 AM, John Szakmeister <jo...@szakmeister.net> wrote:
> On Mon, Feb 25, 2013 at 9:16 AM, jason pellerin <jpel...@gmail.com> wrote:
>> I'm using python3.3. from the deadsnakes PPA on ubuntu 12.04.
>>
>> It seems like some kind of output buffering problem to me -- I can also
>> trigger it by read()ing from process.stderr -- if I make the read() amount
>> big enough. "Big enough" varies by python version, what blocks on 3.3 (say,
>> read(3000)) passes on 3.2.
>
> I've tracked it down, and it's a Python bug. I've filed an issue for it:
> <http://bugs.python.org/issue17367>

I spoke too soon. I changed something in core Python and saw a
difference in behavior. I'm gonna spend more time on this, but it
feels like a Python bug. There's no reason p.communicate() should not
be returning, unless our subprocess is hanging on.

I suppose that could be part of the problem too. Perhaps the
processes aren't dying like they should?

-John

jason pellerin

Mar 6, 2013, 9:04:27 AM
to nose...@googlegroups.com
It's possible that they aren't exiting -- in looking into it here, I added a block before process.communicate():

if process.poll() is None:
        print "It didn't stop"
        stderr = process.stderr.read(1800).decode('utf-8')
        process.terminate()
        process.kill()

which on Python 3.3 fires every time; it's unclear whether it does every time under other Python versions. By adjusting the size of the read, you can make 3.2 and 3.3 freeze on read() -- 2.x doesn't seem to have that problem.

JP



John Szakmeister

Mar 7, 2013, 6:15:58 AM
to nose...@googlegroups.com
On Wed, Mar 6, 2013 at 9:04 AM, jason pellerin <jpel...@gmail.com> wrote:
> It's possible that they aren't exiting -- in looking into it here, I added a
> block before process.communicate():
>
> if process.poll() is None:
> print "It didn't stop"
> stderr = process.stderr.read(1800).decode('utf-8')
> process.terminate()
> process.kill()
>
> which on python 3.3 fires every time, unclear whether it does every time
> under other python versions. By adjusting the size of the read, you can make
> 3.2 and 3.3 freeze on read() -- 2.x doesn't seem to have that problem.

I could see it freezing if data is being emitted to stdout and it becomes
blocked because no one is reading it off. What I can't understand is
why we'd be hung up in p.communicate(). I'm hoping to get some more
tracing in place. I ran out of time yesterday. I have something else
I need to work on for the weekend, but hopefully I can get a little
more debugging time in along the way. Of course, my VM is now acting
up too... Nothing is ever easy. :-)

-John

John Szakmeister

Mar 7, 2013, 11:29:37 AM
to Steven Jenkins, nose...@googlegroups.com
On Thu, Mar 7, 2013 at 11:12 AM, Steven Jenkins
<steven....@gmail.com> wrote:
>
> Is there an ETA for the 1.3 release? #554 is needed for my users, and I'm
> trying to figure out if I should do a locally-patched version against 1.2.1
> or wait for 1.3, as 1.2.1 is broken for them in the meantime. If this is
> better asked in nose-user, then I apologize, and I'll be happy to ask it
> there instead.

Unfortunately, no. Both Jason and I are pretty tied up, and we've been
chasing a bug down under Python 3.3. So for now, I'd say do a locally
patched version.

Sorry I don't have a better answer!

-John

John Szakmeister

Mar 18, 2013, 6:46:00 AM
to nose...@googlegroups.com
On Wed, Mar 6, 2013 at 9:04 AM, jason pellerin <jpel...@gmail.com> wrote:
> It's possible that they aren't exiting -- in looking into it here, I added a
> block before process.communicate():
>
> if process.poll() is None:
> print "It didn't stop"
> stderr = process.stderr.read(1800).decode('utf-8')
> process.terminate()
> process.kill()
>
> which on python 3.3 fires every time, unclear whether it does every time
> under other python versions. By adjusting the size of the read, you can make
> 3.2 and 3.3 freeze on read() -- 2.x doesn't seem to have that problem.

Unfortunately, the VM I was using decided that it was going to fail to
log in. I created a new VM, and now I'm unable to generate the same
problem. I can't win. :-/

I was still seeing random failures from test_keyboardinterrupt though.
I can't say exactly what the failure mode was because the exception
handling obscures the actual path, but I believe that we were
sometimes signaling the child process before it was in the desired
state. With the original test, I had an initial long run, and then it
would consistently fall over after a couple of trials. I just put in
a PR that works much better. Instead of using a timing-based (sleep)
mechanism, I introduced the concept of kill files. With this
technique, the file is created when the support test is in the desired
state. When test_keyboardinterrupt sees the kill file, it'll remove
it and signal the child process. This proves to be a much better
mechanism for the test. Since getting this in place, I've seen zero
failures. I currently have my machine running the test repeatedly, as
fast as it can. I'm up over several thousand runs now. The pull
request is here:
<https://github.com/nose-devs/nose/pull/659>

Jason: I'd be interested if it helps your issue. It's not meant to,
but I imagine killing the child at the right time allows it to die
more gracefully, and may prevent some of the issues you were seeing.
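
The handshake, roughly (the names here are illustrative, not the actual code from the PR):

import os
import signal
import time

# The support test creates the kill file once it is in the state where it
# should be interrupted, e.g. open(killfile, 'w').close().  The parent then
# busy-waits on that file instead of sleeping for a fixed amount of time.
def wait_and_interrupt(process, killfile, retries=100):
    while not os.path.exists(killfile) and retries:
        time.sleep(0.1)
        retries -= 1
    if not retries:
        raise Exception('Timed out waiting for the kill file')
    os.remove(killfile)                    # consume the marker
    os.killpg(process.pid, signal.SIGINT)  # child is now in the desired state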

-John

jason pellerin

Apr 7, 2013, 9:55:56 AM
to nose...@googlegroups.com
Ok. I think we're ready to cut this release, finally. Is the changelog all up to date?




John Szakmeister

Apr 7, 2013, 10:26:15 AM
to nose...@googlegroups.com
On Sun, Apr 7, 2013 at 9:55 AM, jason pellerin <jpel...@gmail.com> wrote:
> Ok. I think we're ready to cut this release, finally. Is the changelog all
> up to date?

Unfortunately, no. I was hoping to get some time to get it there
today, but I'm not sure if I can. 'git log --first-parent
8fa890fa632304bf69924ede9fac94f1a5414387..' should give a list of
changes since the last CHANGELOG update, if you wanted to take a stab
at it.

-John

John Szakmeister

Apr 7, 2013, 3:54:27 PM
to nose...@googlegroups.com
I've got one small addition to make to the CHANGELOG as soon as a
contributor gets back to me with their proper name. Once that goes
in, I think we're ready.

-John

John Szakmeister

Apr 7, 2013, 4:21:52 PM
to nose...@googlegroups.com
On Sun, Apr 7, 2013 at 3:54 PM, John Szakmeister <jo...@szakmeister.net> wrote:
[snip]
> I've got one small addition to make to the CHANGELOG as soon as a
> contributor gets back to me with their proper name. Once that goes
> in, I think we're ready.

And we're good. Roll whenever you're ready, Jason!

-John

jason pellerin

Apr 8, 2013, 9:15:54 AM
to nose...@googlegroups.com
And it's done! Finally! Many thanks.

JP




John Szakmeister

unread,
Apr 8, 2013, 9:22:38 AM4/8/13
to nose...@googlegroups.com
On Mon, Apr 8, 2013 at 9:15 AM, jason pellerin <jpel...@gmail.com> wrote:
>
> And it's done! Finally! Many thanks.

Awesome! Thanks to everyone that contributed along the way!

-John