[Boost-users] What's happened to Ryppl?


Robert Jones

Jan 11, 2011, 5:36:56 AM
to boost...@lists.boost.org

Dave Abrahams

Jan 14, 2011, 1:28:26 PM
to boost...@lists.boost.org, Ryppl Developers, John Wiegley
At Tue, 11 Jan 2011 10:36:56 +0000,
Robert Jones wrote:
> What's happened to Ryppl?

Fair enough; now that I'm digging myself out of the new year's
pile-up, it's time for a status update. [Replies will go to the
ryppl-dev list by default; see
http://groups.google.com/group/ryppl-dev/subscribe for information on
posting]

---------

There are basically three parallel efforts in Ryppl:

I. Modularize boost

* Eric Niebler produced a script to shuffle Boost into separate Git
repositories. He has been maintaining that script to keep up with
minor changes in the Boost SVN, but doesn't have time at the moment
to do much more. Fortunately, I don't think there's much more to be
done on that.

* John Wiegley has developed a comprehensive strategy for rewriting
SVN history to preserve as much information as possible about the
evolution of everything, and he's working on implementing that. I
expect results soon.

II. CMake-ify the modularized Boost

* A bunch of work has been done on this, but we never got to the point
that *everything* built, installed, and passed the tests.

* Denis Arnaud has been maintaining a (non-modularized) Boost-CMake
distribution; see
https://groups.google.com/group/ryppl-dev/msg/b619c95964b0e003?hl=en
and others in that thread for details.

These two efforts can be merged; I'm sure of it.

III. Dependency management

* I have been working on libsat-solver. Sat-solver is the underlying
technology of the zypp installer used by OpenSuse, and it contains
all the important bits needed by any installation and
dependency-management system and has the right licensing. It's a
CMake-based project.

* These are the jobs:

1. Porting to Mac. I did a good chunk of this job
(http://gitorious.org/opensuse/sat-solver/merge_requests/2 ---
including submitting some CMake patches upstream!) but there's
still more to do. Since sat-solver includes all kinds of ruby
bindings and whatnot that we don't really need for this project,
these parts probably need to be ported in order for the changes
to be accepted upstream.

2. Replace the example program's use of RPM by Git.

3. Port to Windows. Mateusz Loskot made a bunch of progress on this
(http://groups.google.com/group/ryppl-dev/browse_thread/thread/7292998aadb04b91)
but it's not yet complete.
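For anyone unfamiliar with the approach sat-solver takes, the core idea is dependency resolution as boolean satisfiability: each installable package is a variable, and dependencies and conflicts become clauses. A minimal sketch (package names and clauses here are made up for illustration; the real library is C and uses far more efficient algorithms):

```python
from itertools import product

# Toy SAT-style dependency resolution: each package is a boolean
# variable, and dependencies/conflicts become CNF clauses that any
# valid install set must satisfy. All names here are hypothetical.
packages = ["app", "libfoo-1.0", "libfoo-2.0"]

# Each clause is a disjunction of (package, required-truth-value) pairs.
clauses = [
    [("app", True)],                                  # we want "app"
    [("app", False), ("libfoo-1.0", True),
     ("libfoo-2.0", True)],                           # app needs some libfoo
    [("libfoo-1.0", False), ("libfoo-2.0", False)],   # versions conflict
]

def solve(packages, clauses):
    """Brute-force SAT: return the first assignment satisfying all clauses."""
    for bits in product([False, True], repeat=len(packages)):
        model = dict(zip(packages, bits))
        if all(any(model[p] == want for p, want in c) for c in clauses):
            return model
    return None

model = solve(packages, clauses)
print(sorted(p for p, installed in model.items() if installed))
# → ['app', 'libfoo-2.0']
```

The solver picks exactly one version of libfoo that satisfies app's dependency without violating the conflict clause; that's the shape of the problem any installer has to solve.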

-------------

Our priorities, in descending order, are:

1. Set up buildbots for modularized boost so we can test work on the
CMake-ification. This step will also serve as a proof-of-concept
for modularized Boost's testing infrastructure.

2. Complete the CMake-ification

3. History-preserving modularization: it's an estimate of course, but
I expect John to have handled that within a few weeks.

4. Do the dependency management part.

As usual, we welcome your assistance, participation, and interest! If
there's any part of this that you'd like to work on, please speak up.

Regards,

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com

_______________________________________________
Boost-users mailing list
Boost...@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/boost-users

John Wiegley

Jan 14, 2011, 2:43:08 PM
to boost...@lists.boost.org
Dave Abrahams <da...@boostpro.com> writes:

> * John Wiegley has developed a comprehensive strategy for rewriting
> SVN history to preserve as much information as possible about the
> evolution of everything, and he's working on implementing that. I
> expect results soon.

I wanted to chime in here and say hello; and that yes, I'm working on the
modularization project with input from both Dave and Eric. I have the
beginnings of the migration script up here:

https://github.com/jwiegley/boost-migrate

It's very rough right now, as I'm still exploring the completeness of the
Subversion 'dump' format, and how to use Git plumbing to avoid the migration
process taking days upon days to complete.
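For readers unfamiliar with the format John mentions: a Subversion 'dump' stream is a sequence of RFC-822-style headers plus length-delimited bodies. A deliberately simplified reader (real dumps are binary-safe and must be parsed by the declared content lengths, not line-by-line) might look like:

```python
# Toy reader for the Subversion 'dump' stream format: pull out each
# revision number and the node paths it touches. Illustrative only;
# the sample dump below is hand-written.
sample = """\
SVN-fs-dump-format-version: 2

Revision-number: 1
Prop-content-length: 10
Content-length: 10

PROPS-END
Node-path: trunk
Node-kind: dir
Node-action: add

Revision-number: 2
Prop-content-length: 10
Content-length: 10

PROPS-END
"""

def revisions(dump_text):
    """Yield (revision number, node paths touched) from a dump stream."""
    rev, paths = None, []
    for line in dump_text.splitlines():
        if line.startswith("Revision-number: "):
            if rev is not None:
                yield rev, paths
            rev, paths = int(line.split(": ")[1]), []
        elif line.startswith("Node-path: "):
            paths.append(line.split(": ", 1)[1])
    if rev is not None:
        yield rev, paths

print(list(revisions(sample)))
# → [(1, ['trunk']), (2, [])]
```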

I fully expect that when completed, the migration script will not only exactly
replicate the existing Subversion repository, revision for commit (with some
revisions being omitted if they only change properties/directories, and
others being split if their transactions affect multiple branches
simultaneously), but it will also modularize that history at the same time,
preserving as much relevant history within each module as possible.

If I had full time to work on this, I'd expect the script to be completed
within 3-4 days. Since my present workload gives me only an hour or so each
day to work on it, it may be a couple weeks before I can invite sincere
criticism. Until then, feel free to add yourself as a watcher on the project,
and I'll post changes to it as I progress.

John

John Wiegley

Jan 22, 2011, 8:01:04 AM
to Ryppl Developers, boost...@lists.boost.org
On Jan 14, 2011, at 2:43 PM, John Wiegley wrote:

> I wanted to chime in here and say hello; and that yes, I'm working on the
> modularization project with input from both Dave and Eric. I have the
> beginnings of the migration script up here:
>
> https://github.com/jwiegley/boost-migrate
>
> It's very rough right now, as I'm still exploring the completeness of the
> Subversion 'dump' format, and how to use Git plumbing to avoid the migration
> process taking days upon days to complete.

Quick status update: Direct conversion from a Subversion flat-filesystem to
an identical Git flat-filesystem now works. It takes 8 GB of RAM and a lot
of time to run, but that can be optimized fairly easily.

Next step is to read in a corrected branches.txt file (this is currently
generated based on heuristics by the 'branches' subcommand), and then use
that information to output a branchified object hierarchy instead of a flat
one. This step should be very easy to implement.
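The branchification step amounts to classifying each flat SVN path into a branch plus an in-branch path. A sketch of that mapping (the prefix entries and file format here are my guess, not the tool's actual ones):

```python
# Sketch of "branchification": classify a flat SVN path into a branch
# plus an in-branch path, driven by prefix mappings such as a corrected
# branches.txt might supply. The entries below are hypothetical.
BRANCH_PREFIXES = {
    "trunk": "master",
    "branches/release": "release",
    "tags/Version_1_45_0": "tag-Version_1_45_0",
}

def branchify(flat_path):
    """Return (branch, path-within-branch), or None if outside any branch."""
    # Longest prefix wins, so "branches/release" beats a shorter match.
    for prefix in sorted(BRANCH_PREFIXES, key=len, reverse=True):
        if flat_path == prefix or flat_path.startswith(prefix + "/"):
            return BRANCH_PREFIXES[prefix], flat_path[len(prefix) + 1:]
    return None

print(branchify("trunk/boost/shared_ptr.hpp"))
# → ('master', 'boost/shared_ptr.hpp')
```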

After that is reading Eric's manifest.txt file and using the information
to produce multiple submodules during the repository conversion process.
This step is quite a bit trickier, and will require a few days to get
right.

Dave Abrahams

Jan 24, 2011, 11:15:24 AM
to boost...@lists.boost.org, Ryppl Developers
At Sat, 22 Jan 2011 08:01:04 -0500,

John Wiegley wrote:
>
> On Jan 14, 2011, at 2:43 PM, John Wiegley wrote:
>
> > I wanted to chime in here and say hello; and that yes, I'm working on the
> > modularization project with input from both Dave and Eric. I have the
> > beginnings of the migration script up here:
> >
> > https://github.com/jwiegley/boost-migrate
> >
> > It's very rough right now, as I'm still exploring the completeness of the
> > Subversion 'dump' format, and how to use Git plumbing to avoid the migration
> > process taking days upon days to complete.
>
> Quick status update: Direct conversion from a Subversion flat-filesystem to
> an identical Git flat-filesystem now works.

So now you have a linear sequence of commits that reflect the state of
the entire SVN tree?

> It takes 8 GB of RAM and a lot of time to run, but that can be
> optimized fairly easily.

And incrementalized, by any chance?

> Next step is to read in a corrected branches.txt file (this is currently
> generated based on heuristics by the 'branches' subcommand), and then use
> that information to output a branchified object hierarchy instead of a flat
> one. This step should be very easy to implement.
>
> After that is reading Eric's manifest.txt file and using the information
> to produce multiple submodules during the repository conversion process.
> This step is quite a bit trickier, and will require a few days to get
> right.

Sounds like you're making great progress; keep it up!

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com


John Wiegley

Jan 24, 2011, 12:41:27 PM
to rypp...@googlegroups.com, boost...@lists.boost.org
Dave Abrahams <da...@boostpro.com> writes:

>> Quick status update: Direct conversion from a Subversion flat-filesystem to
>> an identical Git flat-filesystem now works.
>
> So now you have a linear sequence of commits that reflect the state of
> the entire SVN tree?

Exactly. I'm 98% of the way toward a branchified sequence today.

>> It takes 8 GB of RAM and a lot of time to run, but that can be
>> optimized fairly easily.
>
> And incrementalized, by any chance?

Not yet; still just getting the basics to work.

John

Dave Abrahams

Jan 24, 2011, 9:05:47 PM
to boost...@lists.boost.org, rypp...@googlegroups.com
At Mon, 24 Jan 2011 12:41:27 -0500,

John Wiegley wrote:
>
> Dave Abrahams <da...@boostpro.com> writes:
>
> >> Quick status update: Direct conversion from a Subversion flat-filesystem to
> >> an identical Git flat-filesystem now works.
> >
> > So now you have a linear sequence of commits that reflect the state of
> > the entire SVN tree?
>
> Exactly. I'm 98% of the way toward a branchified sequence today.

Awesome!

> >> It takes 8 GB of RAM and a lot of time to run, but that can be
> >> optimized fairly easily.
> >
> > And incrementalized, by any chance?
>
> Not yet; still just getting the basics to work.

Check.

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com


John Wiegley

Jan 24, 2011, 10:36:02 PM
to rypp...@googlegroups.com, boost...@lists.boost.org
Dave Abrahams <da...@boostpro.com> writes:

>> Exactly. I'm 98% of the way toward a branchified sequence today.
>
> Awesome!

OK, branchification is working! All that's left is submodulization as part of
the same run.

This will actually not be very difficult, just time-consuming to run. I'll
use Eric's manifest.txt file, plus 'git log --follow -C --find-copies-harder'
on each element of each submodule, run against the flat history. The man page
says this is an O(N^2) operation -- where N is very large in Boost's case --
so I may end up having to do some pruning to keep it from getting out of hand.
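The per-module pass John describes could be driven by something like the following; the manifest format shown is an assumption on my part, and only the git invocation itself comes from the message above:

```python
import shlex

# Build (but don't run) one 'git log --follow -C --find-copies-harder'
# command per path of each module listed in a manifest. The manifest
# format and entries below are hypothetical.
manifest = """\
smart_ptr: boost/smart_ptr.hpp libs/smart_ptr
config: boost/config.hpp libs/config
"""

def log_commands(manifest_text):
    cmds = []
    for line in manifest_text.splitlines():
        module, _, paths = line.partition(": ")
        for path in paths.split():
            cmds.append(["git", "log", "--follow", "-C",
                         "--find-copies-harder", "--", path])
    return cmds

for cmd in log_commands(manifest):
    print(shlex.join(cmd))   # commands only built here, not executed
```

Run against the flat history, each such command reconstructs one path's history, copies and renames included; that per-path cost is where the O(N^2) behavior bites.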

Actually, this script is already too slow, so I'm rewriting it in
C++ today both for the native speed increase (10x so far, for dump-file
parsing), and because it lets me use libgit2 (https://github.com/libgit2) to
create Git objects directly, rather than shelling out to git-hash-object and
git-mktree over a million times. That alone takes over 15 hours to do on my
Mac Pro. Don't even ask how long the git gc takes to run! (It's longer).

If anyone wonders whether my process -- which works for any Subversion repo,
btw, not just Boost -- preserves more information than plain git-svn: consider
that my branchified Git has just over one million Git objects in it, while the
boost-svn repository on ryppl has only 593026 right now. That means over 40%
of the repository's objects got dropped on the cutting floor by git-svn's
heuristics.
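The "over 40%" figure checks out against the counts quoted:

```python
# Sanity-check the object-count comparison above.
branchified_objects = 1_000_000   # "just over one million"
git_svn_objects = 593_026

dropped = 1 - git_svn_objects / branchified_objects
print(f"{dropped:.1%}")
# → 40.7%
```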

John

Dean Michael Berris

Jan 24, 2011, 11:49:16 PM
to boost...@lists.boost.org, rypp...@googlegroups.com
On Tue, Jan 25, 2011 at 11:36 AM, John Wiegley <jo...@boostpro.com> wrote:
> Dave Abrahams <da...@boostpro.com> writes:
>
>>> Exactly.  I'm 98% of the way toward a branchified sequence today.
>>
>> Awesome!
>
> OK, branchification is working!  All that's left is submodulization as part of
> the same run.
>

Way cool!

>
> Actually, the speed of this script is already too slow, so I'm rewriting it in
> C++ today both for the native speed increase (10x so far, for dump-file
> parsing), and because it lets me use libgit2 (https://github.com/libgit2) to
> create Git objects directly, rather than shelling out to git-hash-object and
> git-mktree over a million times.  That alone takes over 15 hours to do on my
> Mac Pro.  Don't even ask how long the git gc takes to run!  (It's longer).
>

Interesting. I'd love to see the C++ version too. :)

> If anyone wonders whether my process -- which works for any Subversion repo,
> btw, not just Boost -- preserves more information than plain git-svn: consider
> that my branchified Git has just over one million Git objects in it, while the
> boost-svn repository on ryppl has only 593026 right now.  That means over 40%
> of the repository's objects got dropped on the cutting floor by git-svn's
> heuristics.
>

Coolness! :D So now I think it's a matter of convincing the other
peeps that moving from Subversion to Git is actually a worthwhile
effort. ;)

--
Dean Michael Berris
about.me/deanberris

Eric Niebler

Jan 25, 2011, 12:05:46 AM
to rypp...@googlegroups.com, boost...@lists.boost.org
On 1/25/2011 11:49 AM, Dean Michael Berris wrote:
> On Tue, Jan 25, 2011 at 11:36 AM, John Wiegley <jo...@boostpro.com> wrote:
>> OK, branchification is working! All that's left is submodulization as part of
>> the same run.
<snip>

>
> Coolness! :D So now I think it's a matter of convincing the other
> peeps that moving from Subversion to Git is actually a worthwhile
> effort. ;)

A lot of work remains --- that is, if it's also our intention to
modularize boost and have a functioning cmake build system, too. At
least, modularization seems like it would be a good thing to do at the
same time. And nobody is working on a bjam system for modularized boost.

<idle speculation>
Is it feasible to have both git and svn development going on
simultaneously? Two-way synchronization from non-modularized svn boost
to modularized git boost? Is that pure insanity?
</idle speculation>

--
Eric Niebler
BoostPro Computing
http://www.boostpro.com


Joel Falcou

Jan 25, 2011, 12:57:46 AM
to boost...@lists.boost.org
On 25/01/11 06:05, Eric Niebler wrote:
> <idle speculation>
> Is it feasible to have both git and svn development going on
> simultaneously? Two-way synchronization from non-modularized svn boost
> to modularized git boost? Is that pure insanity?
> </idle speculation>
I was about to ask the same, as I wanted to make stuff happen in MPL in
a git environment and later merge the changes into the mainstream SVN.
But if everything can be done without specific hoops, it'll be even better.

Props to all of you guys involved in this effort :)

Mateusz Loskot

Jan 25, 2011, 5:24:01 AM
to boost...@lists.boost.org
On 25/01/11 05:05, Eric Niebler wrote:
> On 1/25/2011 11:49 AM, Dean Michael Berris wrote:
>> On Tue, Jan 25, 2011 at 11:36 AM, John Wiegley<jo...@boostpro.com> wrote:
>>> OK, branchification is working! All that's left is submodulization as part of
>>> the same run.
> <snip>
>>
>> Coolness! :D So now I think it's a matter of convincing the other
>> peeps that moving from Subversion to Git is actually a worthwhile
>> effort. ;)
>
> A lot of work remains ---

Regarding one bit that is still under construction: sat-solver

https://github.com/mloskot/sat-solver

and if it's still wanted, I am going to continue porting it to Visual
C++ in ~2 weeks. If anyone would like to join the effort, please do!

Best regards,
--
Mateusz Loskot, http://mateusz.loskot.net
Charter Member of OSGeo, http://osgeo.org
Member of ACCU, http://accu.org

Dave Abrahams

Jan 25, 2011, 11:58:13 AM
to boost...@lists.boost.org, rypp...@googlegroups.com
At Mon, 24 Jan 2011 22:36:02 -0500,

John Wiegley wrote:
>
> Dave Abrahams <da...@boostpro.com> writes:
>
> >> Exactly. I'm 98% of the way toward a branchified sequence today.
> >
> > Awesome!
>
> OK, branchification is working! All that's left is submodulization as part of
> the same run.

What do you mean by "submodulization?" I don't think we'll end up
using Git submodules much in the end, but I can imagine why you'd want
to do that now.

> This will actually not be very difficult, just time consuming to run. I'll
> use Eric's manifest.txt file, plus 'git log --follow -C --find-copies-harder'
> on each element of each submodule, run against the flat history. The man page
> says this is an O(N^2) operation -- where N is very large in Boost's case --
> so I may end up having to do some pruning to keep it from getting out of hand.
>
> Actually, the speed of this script is already too slow, so I'm rewriting it in
> C++ today both for the native speed increase (10x so far, for dump-file
> parsing), and because it lets me use libgit2 (https://github.com/libgit2) to
> create Git objects directly, rather than shelling out to git-hash-object and
> git-mktree over a million times. That alone takes over 15 hours to do on my
> Mac Pro. Don't even ask how long the git gc takes to run! (It's longer).
>
> If anyone wonders whether my process -- which works for any Subversion repo,
> btw, not just Boost -- preserves more information than plain git-svn: consider
> that my branchified Git has just over one million Git objects in it, while the
> boost-svn repository on ryppl has only 593026 right now. That means over 40%
> of the repository's objects got dropped on the cutting floor by git-svn's
> heuristics.

Yay, John! :-)

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com


Dave Abrahams

Jan 25, 2011, 12:00:21 PM
to rypp...@googlegroups.com, boost...@lists.boost.org
At Tue, 25 Jan 2011 12:05:46 +0700,

Eric Niebler wrote:
>
> <idle speculation>
> Is it feasible to have both git and svn development going on
> simultaneously? Two-way synchronization from non-modularized svn boost
> to modularized git boost? Is that pure insanity?
> </idle speculation>

Probably not *pure* insanity, but also perhaps not worth the trouble, IMO.

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com


Robert Ramey

Jan 25, 2011, 12:06:19 PM
to boost...@lists.boost.org
Eric Niebler wrote:

A lot of work remains --- that is, if it's also our intention to
modularize boost and have a functioning cmake build system, too.

At least, modularization seems like it would be a good thing to do at the
same time. And nobody is working on a bjam system for modularized boost.

*** I don't believe there is any reason to couple modularization of boost to
any particular build system.
I use bjam to build and test the serialization library on my local machine.
I just set the current directory to libs/serialization/test and run bjam
with some switches. This builds/updates the prerequisites of the
serialization library, builds the serialization library, then builds and
runs the tests (and in my case builds a table of test results, since I use
library status.sh). I would expect that I could do the same with CTest.

The key issue is that the build system permit the building of just one
"module" (and its necessary prerequisites). Bjam (and hopefully ctest) does
this now. Building of "all" of boost is just the building of each module.
Building of some alternative "distribution" is just the building of each of
the component modules (and their prerequisites). There isn't even any reason
why each module has to use the same build system.

<idle speculation>
Is it feasible to have both git and svn development going on
simultaneously? Two-way synchronization from non-modularized svn boost
to modularized git boost? Is that pure insanity?
</idle speculation>

*** By the same token, a "modularized" boost needn't require that all
modules use the same source control system. Ideally, the build for each
module would check out/update the local copy of the module according to
the "configuration file" (...v2 or ctest.?).

Once the procedure for building a module is moved to the module rather
than invoked "from the top", modularization can proceed incrementally.

Robert Ramey


Dave Abrahams

Jan 25, 2011, 12:06:52 PM
to boost...@lists.boost.org
At Tue, 25 Jan 2011 10:24:01 +0000,

Mateusz Loskot wrote:
>
> On 25/01/11 05:05, Eric Niebler wrote:
> > On 1/25/2011 11:49 AM, Dean Michael Berris wrote:
> >> On Tue, Jan 25, 2011 at 11:36 AM, John Wiegley<jo...@boostpro.com> wrote:
> >>> OK, branchification is working! All that's left is submodulization as part of
> >>> the same run.
> > <snip>
> >>
> >> Coolness! :D So now I think it's a matter of convincing the other
> >> peeps that moving from Subversion to Git is actually a worthwhile
> >> effort. ;)
> >
> > A lot of work remains ---
>
> Regarding one of bit still under construction, it is sat-solver
>
> https://github.com/mloskot/sat-solver
>
> and if it's still wanted, I am going to continue porting it to Visual
> C++ in ~2 weeks. If anyone would like to join the effort, please do!

Dude, you rock!! I'm so glad to hear that you're intending to do
that. I will be glad to work with you on it.

Now we have one person/organization *other than me* taking primary
responsibility for each major part of the project:

1. Modularization - John W

2. CMake support - Kitware/Marcus Hanwell (note: one of Kitware's
clients is actually paying for their work on this)

3. Metadata and dependency resolution - Mateusz Loskot

which leaves me free to do:

4. Automated testing

5. Project coordination

If anyone wants to take #4 off my hands that'd be awesome :-)

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com


Dave Abrahams

Jan 25, 2011, 12:54:13 PM
to boost...@lists.boost.org, boost

[Redirecting replies to the boost developers' list; we should have
been there nearly from the beginning. Anyone who wants to see the
earlier parts of the thread should look to
http://groups.google.com/group/boostusers/browse_thread/thread/6d0d01eb3cac4abf]

Hi Robert,

Could you please use standard quoting? I am having trouble separating
the parts you wrote below from what Eric wrote.

Thanks,
Dave

At Tue, 25 Jan 2011 09:06:19 -0800,

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com


Beman Dawes

Jan 27, 2011, 12:52:54 PM
to ryppl-dev, boost-users
On Tue, Jan 25, 2011 at 12:00 PM, Dave Abrahams <da...@boostpro.com> wrote:
> At Tue, 25 Jan 2011 12:05:46 +0700,
> Eric Niebler wrote:
>>
>> <idle speculation>
>> Is it feasible to have both git and svn development going on
>> simultaneously? Two-way synchronization from non-modularized svn boost
>> to modularized git boost? Is that pure insanity?
>> </idle speculation>
>
> Probably not *pure* insanity, but also perhaps not worth the trouble, IMO.

Still, doing a "big bang" conversion to Git all at one time is more
than a notion.

Independent of modularization, ryppl, or anything else, is it time to
start a discussion on the main list about moving to Git?

--Beman

Edward Diener

Jan 27, 2011, 1:12:58 PM
to boost...@lists.boost.org
On 1/27/2011 12:52 PM, Beman Dawes wrote:
> On Tue, Jan 25, 2011 at 12:00 PM, Dave Abrahams<da...@boostpro.com> wrote:
>> At Tue, 25 Jan 2011 12:05:46 +0700,
>> Eric Niebler wrote:
>>>
>>> <idle speculation>
>>> Is it feasible to have both git and svn development going on
>>> simultaneously? Two-way synchronization from non-modularized svn boost
>>> to modularized git boost? Is that pure insanity?
>>> </idle speculation>
>>
>> Probably not *pure* insanity, but also perhaps not worth the trouble, IMO.
>
> Still, doing a "big bang" conversion to Git all at one time is more
> than a notion.
>
> Independent of modularization, ryppl, or anything else, is it time to
> start a discussion on the main list about moving to Git?

I hope such a discussion entails a very strong justification of why Git
is better than Subversion. I still do not buy it, and only find Git more
complicated and harder to use than Subversion with little advantage. I
fear very much an "emperor's new clothes" situation where everyone is
jumping on a bandwagon, because it is the latest thing to do, but no one
is bothering to explain why this latest thing has any value to Boost.

Robert Ramey

Jan 27, 2011, 1:26:21 PM
to boost...@lists.boost.org
Beman Dawes wrote:
> On Tue, Jan 25, 2011 at 12:00 PM, Dave Abrahams <da...@boostpro.com>
> wrote:
>> At Tue, 25 Jan 2011 12:05:46 +0700,
>> Eric Niebler wrote:
>>>
>>> <idle speculation>
>>> Is it feasible to have both git and svn development going on
>>> simultaneously? Two-way synchronization from non-modularized svn
>>> boost to modularized git boost? Is that pure insanity?
>>> </idle speculation>
>>
>> Probably not *pure* insanity, but also perhaps not worth the
>> trouble, IMO.
>
> Still, doing a "big bang" conversion to Git all at one time is more
> than a notion.
>
> Independent of modularization, ryppl, or anything else, is it time to
> start a discussion on the main list about moving to Git?

To me, this illustrates a fundamental problem. If the issue of
modularization were addressed, there would be no requirement
that all libraries use the same version control system. That is,
change to a different version control system would occur
one library at time.

Same can be said for the build system.

The only coupling really required between libraries is

a) namespace coordination
b) directory structure - to some extent at least at the top levels
c) quality standards
i) testing
ii) platform coverage
iii) documentation requirements

If coupling is required somewhere else, it's an error that
is holding us back.

Robert Ramey

Anthony Foiani

Jan 28, 2011, 2:12:24 AM
to boost...@lists.boost.org

Edward --

Edward Diener <eldi...@tropicsoft.com> writes:
> I hope such a discussion entails a very strong justification of why
> Git is better than Subversion. I still do not buy it, and only find
> Git more complicated and harder to use than Subversion with little

> advantage. [...], but no one is bothering to explain why this


> latest thing has any value to Boost.

For my own development efforts, I've found Git to be an improvement
over Subversion in the following ways:

1. Detached development.

The ability to do incremental check-ins without requiring a network
connection is a huge win for me.

2. Data backup.

If every developer (more, every developer's computer) has a full
copy of the history on it, that is more distributed and easier to
obtain than making sure you have transaction-perfect replication of
your master SVN repository. (Or, at least, it was for me.)

3. Experimentation.

In my experience, branching is cheaper and much lighter-weight in
Git than in SVN.

I do sympathize with the "harder than svn" complaint; I find it so
myself. But having been left out in the cold a few times by having
only SVN, I will certainly run my next project with git rather than
svn.

Also, it's not clear that Boost has the same level of contributor
fan-in that is git's truest strength.

Regards,
Tony

Robert Jones

Jan 28, 2011, 2:58:02 AM
to boost...@lists.boost.org
On Thu, Jan 27, 2011 at 5:52 PM, Beman Dawes <bda...@acm.org> wrote:

Independent of modularization, ryppl, or anything else, is it time to
start a discussion on the main list about moving to Git?


I'm already a convert to DVCS's, so in principle migration to Git seems like
a good thing to me, but on a practical level I find I can't run through the Ryppl
'Getting Started' guide because our corporate internet gateways block or
restrict the git protocol. Dropping back to http only works for the first
step, presumably because the submodule download reverts to the git
protocol.

I recognise of course that this is a local issue for me, but I imagine I
will not be alone.

- Rob.

Anthony Williams

Jan 28, 2011, 3:24:48 AM
to boost...@lists.boost.org
Edward Diener <eldi...@tropicsoft.com> writes:

> On 1/27/2011 12:52 PM, Beman Dawes wrote:
>> Independent of modularization, ryppl, or anything else, is it time to
>> start a discussion on the main list about moving to Git?
>
> I hope such a discussion entails a very strong justification of why
> Git is better than Subversion. I still do not buy it, and only find
> Git more complicated and harder to use than Subversion with little
> advantage. I fear very much an "emperor's new clothes" situation where
> everyone is jumping on a bandwagon, because it is the latest thing to
> do, but no one is bothering to explain why this latest thing has any
> value to Boost.

Indeed. Also, why git rather than another DVCS such as Mercurial or
bazaar? Personally, I find Mercurial much easier to use than git, and it
has the same major advantages (which are essentially common to all DVCS
systems).

Also, Mercurial works better on Windows than git does in my experience
--- the git port for Windows is relatively recent, whereas Mercurial has
supported Windows for a while. Since many of the boost developers use
Windows I would have thought this was an important consideration. I
haven't any personal experience of bazaar, so don't know how it fares in
this regard.

The chief advantage of a DVCS over subversion is that you can do local
development with full version control (including history) whilst
offline, and then push/pull when online. Also, you can do incremental
local commits, so you have the advantage of VC, without pushing
unfinished changes to the main repository. Branching and merging tends
to be easier too.

Anthony
--
Author of C++ Concurrency in Action http://www.stdthread.co.uk/book/
just::thread C++0x thread library http://www.stdthread.co.uk
Just Software Solutions Ltd http://www.justsoftwaresolutions.co.uk
15 Carrallack Mews, St Just, Cornwall, TR19 7UL, UK. Company No. 5478976

Dean Michael Berris

Jan 28, 2011, 4:06:04 AM
to boost...@lists.boost.org
On Fri, Jan 28, 2011 at 4:24 PM, Anthony Williams <antho...@gmail.com> wrote:
> Edward Diener <eldi...@tropicsoft.com> writes:
>
>> On 1/27/2011 12:52 PM, Beman Dawes wrote:
>>> Independent of modularization, ryppl, or anything else, is it time to
>>> start a discussion on the main list about moving to Git?
>>
>> I hope such a discussion entails a very strong justification of why
>> Git is better than Subversion. I still do not buy it, and only find
>> Git more complicated and harder to use than Subversion with little
>> advantage. I fear very much an "emperor's new clothes" situation where
>> everyone is jumping on a bandwagon, because it is the latest thing to
>> do, but no one is bothering to explain why this latest thing has any
>> value to Boost.
>
> Indeed. Also, why git rather than another DVCS such as Mercurial or
> bazaar? Personally, I find Mercurial much easier to use than git, and it
> has the same major advantages (which are essentially common to all DVCS
> systems).
>

I have to be honest here and say up front that I have no idea what the
features of mercurial are, so I have some questions with it in
particular:

1. Does it allow for integrating GnuPG signatures in the commit
messages/history? The popular way for certifying that something is
"official" or "is signed off on by <insert maintainer here>" is
through GnuPG PKI. This is what makes the Linux kernel dev
organization work as a self-organizing system.

2. Does it allow for compacting and local compression of assets? Git
has a rich set of tools for compressing and dealing with local
repositories. It also has a very efficient way of preserving objects
across branches and whatnot.

3. Does mercurial work in "email" mode? Git has a way of submitting
patches via email -- and have the same email read-in by git and parsed
as an actual "merge". This is convenient for discussing patches in the
mailing list and preserving the original message/discussion. This
gives people a chance to publicly review the changes and import the
same changeset from the same email message.

4. How does mercurial deal with forks? In Git a repository is
automatically a fork of the source repository. I don't know whether
every mercurial repo is the same as a Git repo though -- meaning
whether the same repository can be exposed to a number of protocols
and dealt with like any other Git repo (push/pull/merge/compact, etc.)

> Also, Mercurial works better on Windows than git does in my experience
> --- the git port for Windows is relatively recent, whereas Mercurial has
> supported Windows for a while. Since many of the boost developers use
> Windows I would have thought this was an important consideration. I
> haven't any personal experience of bazaar, so don't know how it fares in
> this regard.
>

I've used Msysgit for the most part, and it works very well --
actually, works the same in Linux as it does in Windows. Are we
talking about the same Windows port of Git?

> The chief advantage of a DVCS over subversion is that you can do local
> development with full version control (including history) whilst
> offline, and then push/pull when online. Also, you can do incremental
> local commits, so you have the advantage of VC, without pushing
> unfinished changes to the main repository. Branching and merging tends
> to be easier too.
>

+1

--
Dean Michael Berris
about.me/deanberris

Mateusz Loskot

unread,
Jan 28, 2011, 4:24:11 AM1/28/11
to boost...@lists.boost.org

You are not alone Robert.
I think HTTPS *and* at least read-only access through HTTP is a must.

Best regards,
--
Mateusz Loskot, http://mateusz.loskot.net
Charter Member of OSGeo, http://osgeo.org
Member of ACCU, http://accu.org

Dean Michael Berris

unread,
Jan 28, 2011, 4:26:58 AM1/28/11
to boost...@lists.boost.org
On Fri, Jan 28, 2011 at 5:24 PM, Mateusz Loskot <mat...@loskot.net> wrote:
> On 28/01/11 07:58, Robert Jones wrote:
>>
>> I'm already a convert to DVCS's, so in principle migration to Git seems
>> like
>> a good thing to me, but on a practical level I find I can't run through
>> the Ryppl
>> 'Getting Started' guide because our corporate internet gateways block or
>> restrict the git protocol. Dropping back to http only works for the first
>> step, presumably because the submodule download reverts to the git
>> protocol.
>>
>> I recognise of course that this is local issue for me, but I imagine I
>> will not be alone.
>
> You are not alone Robert.
> I think HTTPS *and* at least read-only access through HTTP is a must.
>

Git does support both -- if it's on Github, you get it for free.

For "pushing" stuff to other people's repository, there's a way to
send the changesets as email -- git-am I believe is the term to
Google. :)

HTH

--
Dean Michael Berris
about.me/deanberris

Dean Michael Berris

unread,
Jan 28, 2011, 4:41:16 AM1/28/11
to boost...@lists.boost.org
On Fri, Jan 28, 2011 at 5:26 PM, Dean Michael Berris
<mikhai...@gmail.com> wrote:
> On Fri, Jan 28, 2011 at 5:24 PM, Mateusz Loskot <mat...@loskot.net> wrote:
>>
>> You are not alone Robert.
>> I think HTTPS *and* at least read-only access through HTTP is a must.
>>
>
> Git does support both -- if it's on Github, you get it for free.
>
> For "pushing" stuff to other people's repository, there's a way to
> send the changesets as email -- git-am I believe is the term to
> Google. :)
>

Sorry, git-am is for applying changesets from a mailbox/email; git
format-patch is the way to format patches as emails. :)
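As a rough sketch of that round trip (the repository names and paths below are throwaway examples, not anything from Boost's infrastructure):

```shell
# Rough sketch of the email round trip: format a commit as a patch
# email with git format-patch, then apply it with git am. All
# names and paths here are illustrative.
set -e
work=$(mktemp -d)

# "Upstream" repo with one commit.
git init -q "$work/upstream"
cd "$work/upstream"
git config user.email you@example.com && git config user.name You
echo v1 > lib.hpp && git add lib.hpp && git commit -qm "initial"

# A contributor clones it, commits a fix, and formats the commit
# as a mail-ready patch file.
git clone -q "$work/upstream" "$work/contrib"
cd "$work/contrib"
git config user.email dev@example.com && git config user.name Dev
echo v2 > lib.hpp && git commit -qam "fix lib"
git format-patch -1 -o "$work/outbox"

# The maintainer applies the mailed patch; the original author and
# commit message are preserved.
cd "$work/upstream"
git am "$work/outbox"/*.patch
git log -1 --format='%an %s'    # Dev fix lib
```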

Sebastian Redl

unread,
Jan 28, 2011, 5:33:27 AM1/28/11
to boost...@lists.boost.org
On 28.01.2011 10:06, Dean Michael Berris wrote:
> I have to be honest here and say up front that I have no idea what the
> features of mercurial are, so I have some questions with it in
> particular:
> 2. Does it allow for compacting and local compression of assets? Git
> has a rich set of tools for compressing and dealing with local
> repositories. It also has a very efficient way of preserving objects
> across branches and what not.
Mercurial has a completely different storage format from Git.

> 3. Does mercurial work in "email" mode? Git has a way of submitting
> patches via email -- and have the same email read-in by git and parsed
> as an actual "merge". This is convenient for discussing patches in the
> mailing list and preserving the original message/discussion. This
> gives people a chance to publicly review the changes and import the
> same changeset from the same email message.
Yes, Mercurial can format changesets as emails.

> 4. How does mercurial deal with forks? In Git a repository is
> automatically a fork of the source repository. I don't know whether
> every mercurial repo is the same as a Git repo though -- meaning
> whether the same repository can be exposed to a number of protocols
> and dealt with like any other Git repo (push/pull/merge/compact, etc.)
Hg and Git deal with forks pretty much the same way. There are some
minor differences in the handling of anonymous branching within a single
clone (i.e. what happens when you are not on the most recent commit and
do a commit yourself), I believe.
Hg actually has a plug-in that lets it push and pull to/from a Git server.

>> Also, Mercurial works better on Windows than git does in my experience
>> --- the git port for Windows is relatively recent, whereas Mercurial has
>> supported Windows for a while. Since many of the boost developers use
>> Windows I would have thought this was an important consideration. I
>> haven't any personal experience of bazaar, so don't know how it fares in
>> this regard.
>>
> I've used Msysgit for the most part, and it works very well --
> actually, works the same in Linux as it does in Windows. Are we
> talking about the same Windows port of Git?
I don't think Git currently has any integration plug-ins like TortoiseHg
(Explorer) or VisualHg (Visual Studio).

The thing I like most about Git over Mercurial is the extensive history
rewriting capability (git rebase -i). Wonderful for cleaning up my local
commit mess before pushing.

Sebastian

Anthony Williams

unread,
Jan 28, 2011, 5:34:27 AM1/28/11
to boost...@lists.boost.org
Dean Michael Berris <mikhai...@gmail.com> writes:

> On Fri, Jan 28, 2011 at 4:24 PM, Anthony Williams <antho...@gmail.com> wrote:
>> Edward Diener <eldi...@tropicsoft.com> writes:
>>
>>> On 1/27/2011 12:52 PM, Beman Dawes wrote:
>>>> Independent of modularization, ryppl, or anything else, is it time to
>>>> start a discussion on the main list about moving to Git?
>>>
>>> I hope such a discussion entails a very strong justification of why
>>> Git is better than Subversion. I still do not buy it, and only find
>>> Git more complicated and harder to use than Subversion with little
>>> advantage. I fear very much an "emperor's new clothes" situation where
>>> everyone is jumping on a bandwagon, because it is the latest thing to
>>> do, but no one is bothering to explain why this latest thing has any
>>> value to Boost.
>>
>> Indeed. Also, why git rather than another DVCS such as Mercurial or
>> bazaar? Personally, I find Mercurial much easier to use than git, and it
>> has the same major advantages (which are essentially common to all DVCS
>> systems).
>>
>
> I have to be honest here and say up front that I have no idea what the
> features of mercurial are, so I have some questions with it in
> particular:

For a quick summary of the similarities and differences, see
http://stackoverflow.com/questions/1598759/git-and-mercurial-compare-and-contrast

> 1. Does it allow for integrating GnuPG signatures in the commit
> messages/history? The popular way for certifying that something is
> "official" or "is signed off on by <insert maintainer here>" is
> through GnuPG PKI. This is what makes the Linux kernel dev
> organization more like a self-organizing matter.

Yes. See http://mercurial.selenic.com/wiki/GpgExtension
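For reference, a minimal sketch of enabling it (a hypothetical hgrc fragment; the key ID is a placeholder, and the wiki page above has the real options):

```ini
# Hypothetical ~/.hgrc fragment enabling the bundled GPG extension.
[extensions]
gpg =

[gpg]
key = ABCD1234
```

With that in place, `hg sign` adds a signature for the current revision and `hg sigcheck REV` verifies one, if my reading of the extension docs is right.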

> 2. Does it allow for compacting and local compression of assets? Git
> has a rich set of tools for compressing and dealing with local
> repositories. It also has a very efficient way of preserving objects
> across branches and what not.

Mercurial does compress the repository. How it compares with git, I
don't know.

> 3. Does mercurial work in "email" mode? Git has a way of submitting
> patches via email -- and have the same email read-in by git and parsed
> as an actual "merge". This is convenient for discussing patches in the
> mailing list and preserving the original message/discussion. This
> gives people a chance to publicly review the changes and import the
> same changeset from the same email message.

From Mercurial, you can export patches to a text file containing the
diffs and a few headers, and import that text file into another repo,
where it preserves the commit message. Is that the sort of thing you
meant?

> 4. How does mercurial deal with forks? In Git a repository is
> automatically a fork of the source repository. I don't know whether
> every mercurial repo is the same as a Git repo though -- meaning
> whether the same repository can be exposed to a number of protocols
> and dealt with like any other Git repo (push/pull/merge/compact, etc.)

Your local repository can push/pull from any remote repository, and you
can set up a default remote repo for "hg push" and "hg pull" without a
repository path. I don't know the full set of protocol options; I use
local and http access.

>> Also, Mercurial works better on Windows than git does in my experience
>> --- the git port for Windows is relatively recent, whereas Mercurial has
>> supported Windows for a while. Since many of the boost developers use
>> Windows I would have thought this was an important consideration. I
>> haven't any personal experience of bazaar, so don't know how it fares in
>> this regard.
>>
>
> I've used Msysgit for the most part, and it works very well --
> actually, works the same in Linux as it does in Windows. Are we
> talking about the same Windows port of Git?

The old port was cygwin based, and was a real pain. I tried using
msysgit and had a few problems, but it was an early version. It might be
much better now. OTOH, Mercurial has always "just worked" for me, on
both Windows and Linux.

Like I said above, my personal opinion is that mercurial is easier to
use. YMMV. I also know people who are big fans of bazaar, but I've never
used it myself.

Anthony
--
Author of C++ Concurrency in Action http://www.stdthread.co.uk/book/
just::thread C++0x thread library http://www.stdthread.co.uk
Just Software Solutions Ltd http://www.justsoftwaresolutions.co.uk
15 Carrallack Mews, St Just, Cornwall, TR19 7UL, UK. Company No. 5478976

_______________________________________________

Matthieu Brucher

unread,
Jan 28, 2011, 5:57:41 AM1/28/11
to boost...@lists.boost.org

I've used Msysgit for the most part, and it works very well --
actually, works the same in Linux as it does in Windows. Are we
talking about the same Windows port of Git?
I don't think Git currently has any integration plug-ins like TortoiseHg (Explorer) or VisualHg (Visual Studio).

There is a TortoiseGit that relies on MsysGit. My biggest worry about Git on Windows is that you don't get Git in the ordinary command prompt; you have to launch Git Bash for it (and then you're stuck if you want to change drives). So TortoiseGit is the only hope for Git on Windows at the moment.

Matthieu
--
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher

Diederick C. Niehorster

unread,
Jan 28, 2011, 6:03:06 AM1/28/11
to boost...@lists.boost.org
On Fri, Jan 28, 2011 at 18:33, Sebastian Redl
<sebasti...@getdesigned.at> wrote:
> On 28.01.2011 10:06, Dean Michael Berris wrote:
>> 4. How does mercurial deal with forks? In Git a repository is
>> automatically a fork of the source repository. I don't know whether
>> every mercurial repo is the same as a Git repo though -- meaning
>> whether the same repository can be exposed to a number of protocols
>> and dealt with like any other Git repo (push/pull/merge/compact, etc.)
>
> Hg and Git deal with forks pretty much the same way. There are some minor
> differences in the handling of anonymous branching within a single clone
> (i.e. what happens when you are not on the most recent commit and do a
> commit yourself), I believe.
> Hg actually has a plug-in that lets it push and pull to/from a Git server.

I think this is a very important point for this discussion. Both Hg
and bzr come with a plugin that lets them work with a git
branch/server.
Hence, as the current work with git is well underway and git is very
popular out there, we might as well stick with it. Users who feel
more comfortable using hg or bzr can still do so with no obstacle
(assuming these plugins work well; I haven't tried advanced use cases
myself).

>
> The think I like most about Git over Mercurial is the extensive history
> rewriting capability (hg rebase -i). Wonderful for cleaning up my local
> commit mess before pushing.

Yup, bzr can do such things too (also useful if you accidentally
committed files in some revision that you'd like to get rid of from
all revisions, though I believe that was a different command).

Best,
Dee

Mateusz Loskot

unread,
Jan 28, 2011, 6:12:32 AM1/28/11
to boost...@lists.boost.org
On 28/01/11 10:57, Matthieu Brucher wrote:
>
> I've used Msysgit for the most part, and it works very well --
> actually, works the same in Linux as it does in Windows. Are we
> talking about the same Windows port of Git?
>
> I don't think Git currently has any integration plug-ins like
> TortoiseHg (Explorer) or VisualHg (Visual Studio).
>
> There is a TortoiseGit that relies on MsysGit. The biggest worry of Git
> on Windows is that you don't have access in the command line, you have
> to launch git bash for this (and then you're screwed if you want to
> change disk)

That is not true. I never use Git Bash. I add ${GITINSTALL}\bin
to my PATH, and voilà.

Best regards,
--
Mateusz Loskot, http://mateusz.loskot.net
Charter Member of OSGeo, http://osgeo.org
Member of ACCU, http://accu.org

Dean Michael Berris

unread,
Jan 28, 2011, 6:09:41 AM1/28/11
to boost...@lists.boost.org
On Fri, Jan 28, 2011 at 6:34 PM, Anthony Williams <antho...@gmail.com> wrote:
> Dean Michael Berris <mikhai...@gmail.com> writes:
>>
>>
>> I have to be honest here and say up front that I have no idea what the
>> features of mercurial are, so I have some questions with it in
>> particular:
>
> For a quick summary of the similarities and differences, see
> http://stackoverflow.com/questions/1598759/git-and-mercurial-compare-and-contrast
>

Thanks for the link -- that was a pretty long accepted answer. :)

>> 1. Does it allow for integrating GnuPG signatures in the commit
>> messages/history? The popular way for certifying that something is
>> "official" or "is signed off on by <insert maintainer here>" is
>> through GnuPG PKI. This is what makes the Linux kernel dev
>> organization more like a self-organizing matter.
>
> Yes. See http://mercurial.selenic.com/wiki/GpgExtension
>

Okay.

>> 2. Does it allow for compacting and local compression of assets? Git
>> has a rich set of tools for compressing and dealing with local
>> repositories. It also has a very efficient way of preserving objects
>> across branches and what not.
>
> Mercurial does compress the repository. How it compares with git, I
> don't know.
>

Okay.

>> 3. Does mercurial work in "email" mode? Git has a way of submitting
>> patches via email -- and have the same email read-in by git and parsed
>> as an actual "merge". This is convenient for discussing patches in the
>> mailing list and preserving the original message/discussion. This
>> gives people a chance to publicly review the changes and import the
>> same changeset from the same email message.
>
> From Mercurial, you can export patches to a text file containing the
> diffs and a few headers, and import that text file into another repo,
> where it preserves the commit message. Is that the sort of thing you
> meant?
>

Well, not really -- git has git-format-patch, which crafts an
appropriately encoded email message, and git can import patches
from a mail message directly.

>> 4. How does mercurial deal with forks? In Git a repository is
>> automatically a fork of the source repository. I don't know whether
>> every mercurial repo is the same as a Git repo though -- meaning
>> whether the same repository can be exposed to a number of protocols
>> and dealt with like any other Git repo (push/pull/merge/compact, etc.)
>
> Your local repository can push/pull from any remote repository, and you
> can set up a default remote repo for "hg push" and "hg pull" without a
> repository path. I don't know the full set of protocol options; I use
> local and http access.
>

Okay, but I think the thing I was asking was whether the same two
repositories share the same history information?

>>
>> I've used Msysgit for the most part, and it works very well --
>> actually, works the same in Linux as it does in Windows. Are we
>> talking about the same Windows port of Git?
>
> The old port was cygwin based, and was a real pain. I tried using
> msysgit and had a few problems, but it was an early version. It might be
> much better now. OTOH, Mercurial has always "just worked" for me, on
> both Windows and Linux.
>

Ok.

> Like I said above, my personal opinion is that mercurial is easier to
> use. YMMV. I also know people who a big fans of bazaar, but I've never
> used it myself.
>

I agree.

However since hg and git can work with each other, I don't see why
using either one would be a big problem as both have a pretty similar
model looking at it from the outside. I'd love to hear from someone
who uses bzr though.

Thanks again Anthony!

--
Dean Michael Berris
about.me/deanberris

Anthony Williams

unread,
Jan 28, 2011, 6:25:16 AM1/28/11
to boost...@lists.boost.org
Dean Michael Berris <mikhai...@gmail.com> writes:

> On Fri, Jan 28, 2011 at 6:34 PM, Anthony Williams <antho...@gmail.com> wrote:
>> Dean Michael Berris <mikhai...@gmail.com> writes:
>>> 4. How does mercurial deal with forks? In Git a repository is
>>> automatically a fork of the source repository. I don't know whether
>>> every mercurial repo is the same as a Git repo though -- meaning
>>> whether the same repository can be exposed to a number of protocols
>>> and dealt with like any other Git repo (push/pull/merge/compact, etc.)
>>
>> Your local repository can push/pull from any remote repository, and you
>> can set up a default remote repo for "hg push" and "hg pull" without a
>> repository path. I don't know the full set of protocol options; I use
>> local and http access.
>>
>
> Okay, but I think the thing I was asking was whether the same two
> repositories share the same history information?

Yes. A Mercurial clone is a full copy of the source repo, including all
history.

> However since hg and git can work with each other, I don't see why
> using either one would be a big problem as both have a pretty similar
> model looking at it from the outside.

That's true. I might try out the hg-git extension
(http://mercurial.selenic.com/wiki/HgGit)

Anthony
--
Author of C++ Concurrency in Action http://www.stdthread.co.uk/book/
just::thread C++0x thread library http://www.stdthread.co.uk
Just Software Solutions Ltd http://www.justsoftwaresolutions.co.uk
15 Carrallack Mews, St Just, Cornwall, TR19 7UL, UK. Company No. 5478976


Frédéric Bron

unread,
Jan 28, 2011, 6:28:26 AM1/28/11
to boost...@lists.boost.org
>   The ability to do incremental check-ins without requiring a network
>   connection is a huge win for me.

This point is really key for me too. I am using git for all my work
and svn only for boost. I am currently working on an extension of the
type traits libraries, and it is a real pain not to be able to do:

- partial commits (= do not commit everything)
- local commits (= commit things but do not bother others, i.e. no
need to wait for a perfect solution before committing)

>   In my experience, branching is cheaper and much lighter-weight in
>   Git than in SVN.

That is very interesting also: branching is very very easy.

For me cvs is 1, svn is 2, and git is 1000. I would love it if Boost switched to git.

Frédéric

Vladimir Prus

unread,
Jan 28, 2011, 7:06:25 AM1/28/11
to boost...@lists.boost.org
Dean Michael Berris wrote:

>> From Mercurial, you can export patches to a text file containing the
>> diffs and a few headers, and import that text file into another repo,
>> where it preserves the commit message. Is that the sort of thing you
>> meant?
>>
>
> Well, not really -- git has git-format-patch that actually crafts an
> appropriately encoded email message.

And you can even push those messages to the Drafts folder of your IMAP
email server. In practice, though, it's not like using attachments is
too painful, and for most practical cases, the time you need to work
with the patch on both ends is far greater than the time spent attaching
a file and then saving an attachment.

And 'git am' is actually a strong candidate for the worst command in
git. You're gonna love those .rej files and how 'git mergetool' does
not work when 'git am' fails.

So, I don't think git scores any points on this particular item.

- Volodya

--
Vladimir Prus
Mentor Graphics
+7 (812) 677-68-40

Dave Abrahams

unread,
Jan 28, 2011, 7:24:04 AM1/28/11
to boost...@lists.boost.org
On Thu, Jan 27, 2011 at 1:26 PM, Robert Ramey <ra...@rrsd.com> wrote:
> Beman Dawes wrote:

>> Independent of modularization, ryppl, or anything else, is it time to
>> start a discussion on the main list about moving to Git?
>
> To me, this illustrates a fundamental problem.  If the issue of
> modularization were addressed, there would be no requirement
> that all libraries use the same version control system.  That is,
> change to a different version control system would occur
> one library at time.
>
> Same can be said for the build system.

In principle, true. In practice, we need some consistency across
boost or it will become hard for the community to contribute, and
especially hard for people (including release managers) to step in and
fix things, or even assemble a distribution. This is to say nothing
of automated testing.

> The only coupling really required between libraries is
>
> a) namespace coordination
> b) directory structure - to some extent at least at the top levels
> c) quality standards
>    i) testing

Testing introduces a build system dependency.

>    ii) platform coverage
>    iii) documentation requirements
>
> If coupling is required somewhere else, it's an error that
> is holding us back.

I don't believe such an error exists here.

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com

Dave Abrahams

unread,
Jan 28, 2011, 8:39:14 AM1/28/11
to boost...@lists.boost.org
We're having this discussion on the wrong list, IMO. I suggest moving
it to the developers' list.

2011/1/28 Frédéric Bron <freder...@m4x.org>:

--

Dave Abrahams
BoostPro Computing
http://www.boostpro.com

Edward Diener

unread,
Jan 28, 2011, 8:42:32 AM1/28/11
to boost...@lists.boost.org
On 1/28/2011 2:12 AM, Anthony Foiani wrote:
>
> Edward --
>
> Edward Diener<eldi...@tropicsoft.com> writes:
>> I hope such a discussion entails a very strong justification of why
>> Git is better than Subversion. I still do not buy it, and only find
>> Git more complicated and harder to use than Subversion with little
>> advantage. [...], but no one is bothering to explain why this
>> latest thing has any value to Boost.
>
> For my own development efforts, I've found Git to be an improvement
> over Subversion in the following ways:
>
> 1. Detached development.
>
> The ability to do incremental check-ins without requiring a network
> connection is a huge win for me.

What do you mean by "incremental checkins"? If I use SVN I can make as
many changes locally as I want.

>
> 2. Data backup.
>
> If every developer (more, every developer's computer) has a full
> copy of the history on it, that is more distributed and easier to
> obtain than making sure you have transaction-perfect replication of
> your master SVN repository. (Or, at least, it was for me.)

"More distributed" means nothing to me. Someone really needs to justify
this distributed development idea with something more than "it's
distributed, so it must be good".

>
> 3. Experimentation.
>
> In my experience, branching is cheaper and much lighter-weight in
> Git than in SVN.

Please explain "cheaper and lighter weight" ?

It is all this rhetoric that really bothers me from developers on the
Git bandwagon. I would love to see real technical proof.

Vladimir Prus

unread,
Jan 28, 2011, 8:48:27 AM1/28/11
to boost...@lists.boost.org
Dave Abrahams wrote:

> We're having this discussion on the wrong list, IMO. I suggest moving
> it to the developers' list.

May I suggest that, to keep the signal/noise ratio on the list at an
acceptable level, we don't have a general VCS shootout discussion? It
will never lead to anything. Rather, the parties interested in having
Boost switch to any version control system that is not SVN should
propose a specific plan, including hosting, administration, adjustment
of all scripts, new workflows for Boost maintainers and authors of
proposed libraries, etc.

It would be seriously not funny if, 1000 messages later, we were to
find that git is about to solve every problem on earth, but there's
nobody to do the *complete* transition and ongoing maintenance.

- Volodya

--
Vladimir Prus
Mentor Graphics
+7 (812) 677-68-40


Diederick C. Niehorster

unread,
Jan 28, 2011, 8:49:24 AM1/28/11
to boost...@lists.boost.org
On Fri, Jan 28, 2011 at 21:39, Dave Abrahams <da...@boostpro.com> wrote:
> We're having this discussion on the wrong list, IMO.  I suggest moving
> it to the developers' list.

Frankly, I do find it interesting to read what those with a lot of
experience--informed opinions--have to say about the matter! Bet I'm
not the only one.

Best,
Dee

Edward Diener

unread,
Jan 28, 2011, 8:46:56 AM1/28/11
to boost...@lists.boost.org

I do not follow why these are advantages. I can make any changes locally
for files using SVN without having to have a connection to the SVN
server. Your phrase "incremental local commits" sounds like more Git
rhetoric to me. How does this differ from just changing files locally
under SVN ?

Eric J. Holtman

unread,
Jan 28, 2011, 8:53:29 AM1/28/11
to boost...@lists.boost.org
On 1/28/2011 7:49 AM, Diederick C. Niehorster wrote:
> On Fri, Jan 28, 2011 at 21:39, Dave Abrahams <da...@boostpro.com> wrote:
>> We're having this discussion on the wrong list, IMO. I suggest moving
>> it to the developers' list.
>
> Frankly, I do find it interesting to read what those with a lot of
> experience--informed opinions--have to say about the matter! Bet I'm
> not the only one.
>


The developer's list isn't restricted access, is it?

Eric J. Holtman

unread,
Jan 28, 2011, 8:55:50 AM1/28/11
to boost...@lists.boost.org
On 1/28/2011 7:46 AM, Edward Diener wrote:

>
> I do not follow why these are advantages. I can make any changes locally
> for files using SVN without having to have a connection to the SVN
> server. Your phrase "incremental local commits" sounds like more Git
> rhetoric to me. How does this differ from just changing files locally
> under SVN ?
>


You can check in while you work, which means you
don't have to worry about "breaking the build" or anything
like that.

Write some code, test it, seems to work, check it in. Come
back after lunch, discover it's fubar, revert. Lather, rinse,
repeat.

I still use SVN, but I occasionally think about switching.

Sebastian Redl

unread,
Jan 28, 2011, 9:01:02 AM1/28/11
to boost...@lists.boost.org
On 28.01.2011 14:46, Edward Diener wrote:
>
> I do not follow why these are advantages. I can make any changes
> locally for files using SVN without having to have a connection to the
> SVN server. Your phrase "incremental local commits" sounds like more
> Git rhetoric to me. How does this differ from just changing files
> locally under SVN ?
There is a difference between changing a file and committing a change.
One is just changed file data. The other is a record of that change in
the VCS. DVCSs allow you to do commits locally, without having a
connection to some central server.

Sebastian

Christopher Jefferson

unread,
Jan 28, 2011, 9:03:07 AM1/28/11
to boost...@lists.boost.org

On 28 Jan 2011, at 13:42, Edward Diener wrote:

> On 1/28/2011 2:12 AM, Anthony Foiani wrote:
>>
>> Edward --
>>
>> Edward Diener<eldi...@tropicsoft.com> writes:
>>> I hope such a discussion entails a very strong justification of why
>>> Git is better than Subversion. I still do not buy it, and only find
>>> Git more complicated and harder to use than Subversion with little
>>> advantage. [...], but no one is bothering to explain why this
>>> latest thing has any value to Boost.
>>
>> For my own development efforts, I've found Git to be an improvement
>> over Subversion in the following ways:
>>
>> 1. Detached development.
>>
>> The ability to do incremental check-ins without requiring a network
>> connection is a huge win for me.
>
> What do you mean by "incremental checkins"? If I use SVN I can make as many changes locally as I want.

With 'git' you can commit those incremental checkins to your local repository. You can then decide later to either push them all up to the boost repository, merge them into a single commit, or abandon them.
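A minimal sketch of that workflow in a scratch repo (the squash step below uses git reset --soft, which is one of several ways to merge commits into one):

```shell
# Toy repo demonstrating incremental local commits that are later
# squashed into a single commit before sharing. No network involved.
set -e
repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"
git config user.email you@example.com && git config user.name You
echo base > f && git add f && git commit -qm "base"

# Two quick work-in-progress checkpoints, made offline.
echo step1 >> f && git commit -qam "wip: step 1"
echo step2 >> f && git commit -qam "wip: step 2"

# Collapse both WIP commits into one tidy commit before any push.
git reset --soft HEAD~2
git commit -qm "add feature"
git log --oneline    # add feature / base
```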


>>
>> 3. Experimentation.
>>
>> In my experience, branching is cheaper and much lighter-weight in
>> Git than in SVN.
>
> Please explain "cheaper and lighter weight" ?
>
> It is all this rhetoric that really bothers me from developers on the Git bandwagon. I would love to see real technical proof.

I'm not sure what you mean by "technical proof", but we switched from svn to git at work. It is very easy to say "apply commits X, Y and Z from branch A to branch B", or to keep multiple branches in sync.

We found this basically impossible to do in svn, where it is necessary to manually keep track of patches which need applying. Boost appears to have a similar problem, requiring frequent manual diffs between head and release to find patches which have not been applied yet.
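The "apply commit X from branch A to branch B" step maps to git cherry-pick; a toy sketch (branch and file names invented):

```shell
# Toy example of porting an individual commit between branches with
# git cherry-pick.
set -e
repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"
git config user.email you@example.com && git config user.name You
echo base > f && git add f && git commit -qm "base"
git branch release                # release starts at "base"

# A bug fix lands on the development branch.
echo fix > bugfix.txt && git add bugfix.txt && git commit -qm "fix bug"
fix=$(git rev-parse HEAD)

# Apply just that one commit onto the release branch.
git checkout -q release
git cherry-pick "$fix"
git log --oneline                 # fix bug / base
```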

Chris

Felipe Magno de Almeida

unread,
Jan 28, 2011, 9:07:01 AM1/28/11
to boost...@lists.boost.org
On Fri, Jan 28, 2011 at 8:57 AM, Matthieu Brucher
<matthieu...@gmail.com> wrote:
>>>>
>>> I've used Msysgit for the most part, and it works very well --
>>> actually, works the same in Linux as it does in Windows. Are we
>>> talking about the same Windows port of Git?
>>
>> I don't think Git currently has any integration plug-ins like TortoiseHg
>> (Explorer) or VisualHg (Visual Studio).
>>
> There is a TortoiseGit that relies on MsysGit. The biggest worry of Git on
> Windows is that you don't have access in the command line, you have to
> launch git bash for this (and then you're screwed if you want to change
> disk). So TortoiseGit is the only hope for Git on Windows at the moment.

I use it on the command line in Windows all the time.

> Matthieu
> --
> Information System Engineer, Ph.D.
> Blog: http://matt.eifelle.com
> LinkedIn: http://www.linkedin.com/in/matthieubrucher

Regards,
--
Felipe Magno de Almeida

Anthony Williams

unread,
Jan 28, 2011, 9:29:06 AM1/28/11
to boost...@lists.boost.org
Edward Diener <eldi...@tropicsoft.com> writes:

By "incremental local commits", I meant that I can make a small change
and commit it to the VCS locally whilst offline. I can then make another
and another and another, rollback some changes and make some more, and
commit that, and so forth. Then, later, I can upload the whole bunch of
commits (with the log messages I made at the time) to the remote server.

e.g. I'm working on a new, complex feature that impacts a lot of
stuff. I can write a test, make it pass, and check in the change
locally. When I'm working well I can make such commits every few
minutes. With a remote server this can be slow and painful.

Also, it doesn't matter if my changes leave partially complete features
that won't integrate well, as no-one else can use it. From an SVN
perspective, it's like having a private branch, and only merging to
trunk at carefully chosen points. However, DVCSs tend to have much
better handling of branching and merging than SVN, so merging that
private branch to trunk when someone else has merged their changes in
the meantime is much less of an ordeal.

I can also rollback locally to an older revision whilst offline, do
diffs and merges between branches whilst offline, and then push changes
to the remote server later when online. The lack of a need for a remote
connection for such things can make them considerably faster.
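[Editor's note: the offline workflow described above can be sketched with a throwaway repository. This is a hedged illustration, not Anthony's actual setup; the file names, commit messages, and `origin` remote are hypothetical.]

```shell
# Sketch of offline incremental commits in git (hypothetical repo).
mkdir demo && cd demo && git init -q

# Write a test, commit it locally -- no server involved.
echo "test: feature works" > test.txt
git add test.txt
git -c user.name=dev -c user.email=dev@example.com commit -qm "Add test"

# Make it pass, commit again a few minutes later.
echo "feature implementation" > feature.txt
git add feature.txt
git -c user.name=dev -c user.email=dev@example.com commit -qm "Make the test pass"

# Roll back the last commit locally, still offline.
git reset -q --hard HEAD~1

# Diff against an older revision, also offline.
git diff --stat HEAD

# Later, when online, the accumulated commits (with their original
# log messages) would be uploaded in one go:
# git push origin master
```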

Anthony
--
Author of C++ Concurrency in Action http://www.stdthread.co.uk/book/
just::thread C++0x thread library http://www.stdthread.co.uk
Just Software Solutions Ltd http://www.justsoftwaresolutions.co.uk
15 Carrallack Mews, St Just, Cornwall, TR19 7UL, UK. Company No. 5478976

_______________________________________________

Ted Byers

unread,
Jan 28, 2011, 10:59:38 AM1/28/11
to boost...@lists.boost.org
>From: boost-use...@lists.boost.org
[mailto:boost-use...@lists.boost.org] On Behalf Of Eric J. Holtman
>Sent: January-28-11 8:56 AM
>To: boost...@lists.boost.org
>Subject: Re: [Boost-users] What's happened to Ryppl?

>
>>On 1/28/2011 7:46 AM, Edward Diener wrote:
>
>
>> I do not follow why these are advantages. I can make any changes
>> locally for files using SVN without having to have a connection to the
>> SVN server. Your phrase "incremental local commits" sounds like more
>> Git rhetoric to me. How does this differ from just changing files
>> locally under SVN ?
>>
>
>
>You can check in while you work. Which means you
>don't have to worry about "breaking the build", or anything like that.
>
>Write some code, test it, seems to work, check it in. Come back after
>lunch, discover it's fubar, revert. Lather, rinse, repeat.

This may be adequate IF you're working alone and if you have all the time in
the world, but it will become an unmaintainable nightmare when the number of
programmers contributing to the project increases significantly and as time
pressures grow. Imagine the chaos that would result from this if you had a
dozen programmers doing this independently on the same codebase.

And actually, I don't care which version control software is in use, as long
as it is used well. Each product has strengths and weaknesses, and thus
some will have strong preferences for those products that most closely
reflect their own tastes. Unless a given product's developers are
blithering idiots, a rational argument can be made for using their product.
And then, the final decision is made either by the team, democratically, or
autocratically by a project manager or team lead. And if a change is to be
made, the onus is on the proponents of the change to first prove the change
is wise and then to provide/plan the means of making the change. It is
folly to decide to make a change, even if to a demonstrably superior
product, if there are insufficient resources (including time) to get it
done; especially if the existing setup is working adequately.  A practical
programmer, while having preferences for certain tools over others, will be
flexible enough to work with whatever is in place for a project to which he
has been assigned to contribute (in a commercial setting) or to which he
wishes to contribute in other cases.

As one responsible for a small development team, and who has to keep junior
and intermediate programmers on track, this notion of "Write some code,
test it, seems to work, check it in. Come back after lunch, discover it's
fubar, revert. Lather, rinse, repeat" scares me. When you're dealing with a
commercial app that has half a million lines of code, or more, it is just
too easy to break things (there are practical and commercial reasons for
rational modularization, such as results from object-oriented programming,
as well as proper management/design of compilation units, &c.).

I would prefer a model where there is a highly developed suite of test
code (actually, I find it is often best if one begins with the tests
first - but sometimes that is not practical): unit tests, integration
tests and usability tests, and nothing gets checked in unless the code
base plus the new code not only compiles, but the developer can show that
with his implemented changes the system still passes all tests. And note,
his new code must come with a test suite of its own, and must pass through
a code review before we accept that his tests are adequate and thus before
he can run the full test suite. With this model, it happens that new code
stresses existing code in initially unexpected ways, revealing previously
undetected bugs. But at the same time, it makes it less likely that new
code will introduce a significant number of new bugs when it is approved
to be committed to the codebase. And this means that while the new code
that is approved to be committed will have the same number of bugs per
thousand lines of code that most other programmers experience in their
code, the number of bugs per thousand lines of code can only decrease.

Where the version control software in place does not support a given
detail of this model (and note, this model can be made to work even with
something as primitive as RCS or CVS), we need a manual process to make it
work. In my practice, no member of my team commits anything until we know
it works with everything already in the repository - no exceptions. This
means that sometimes a programmer will work on an assigned task for days,
or even a week, without committing changes to the repository. This
actually helps productivity rates, since we waste much less time tracking
down bugs that had been introduced weeks or months earlier, and then
fixing them along with new code that unwittingly depended on code that was
broken. Before I learned this, I occasionally saw situations where weeks
or months worth of work had to be discarded because of dependencies within
code along with months-old code that had subtle bugs (but then, I have
been doing this for 30+ years in a number of different languages, and the
commercial practice of software development wasn't then what it is now).

Cheers

Ted

Eric J. Holtman

unread,
Jan 28, 2011, 11:07:05 AM1/28/11
to boost...@lists.boost.org
On 1/28/2011 9:59 AM, Ted Byers wrote:

>
> This may be adequate IF you're working alone and if you have all the time in
> the world, but it will become an unmaintainable nightmare when the number of
> programmers contributing to the project increases significantly and as time
> pressures grow. Imagine the chaos that would result from this if you had a
> dozen programmers doing this independently on the same codebase.

You proceed from a false assumption. You might have
1000 commits in your local copy. When you're done, and
ready to publish, you "push" it as one.

No different than SVN.

> commercial app that has half a million lines of code, or more, it is just
> too easy to break things (there are practical and commercial reasons for
> rational modularization as results in object oriented programming, as well
> as proper management/design of compilation units, &c.).
>

Again, no one sees your changes until you're ready. It's
there to help the individual. How often have you been
in the middle of a large change to quite a few files, then
said "oh, this sucks", and reverted it? Then, a half
hour later, you decide "Oh, wait, 100 of those lines (out of
the 1000 you just threw away) might be useful".

With SVN, you're hosed. With git, you're not.
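[Editor's note: a minimal sketch of the recovery scenario described above, using a hypothetical repository. The point is that in git the "thrown away" commit remains reachable through the reflog.]

```shell
# Revert a large local change, then recover part of it later.
mkdir recover-demo && cd recover-demo && git init -q
echo "base" > base.txt
git add base.txt
git -c user.name=dev -c user.email=dev@example.com commit -qm "Base"

echo "100 useful lines among 1000" > big_change.txt
git add big_change.txt
git -c user.name=dev -c user.email=dev@example.com commit -qm "Large change"

# "Oh, this sucks" -- throw the whole change away.
git reset -q --hard HEAD~1

# Half an hour later: the discarded commit is still in the reflog,
# so the useful lines can be pulled back file by file.
git checkout -q 'HEAD@{1}' -- big_change.txt
```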

Joel....@lri.fr

unread,
Jan 28, 2011, 11:10:18 AM1/28/11
to boost...@lists.boost.org
> As one responsible for a small development team, and who has to keep
> junior
> and intermediate programmers on track, this notion of " Write some code,
> test it, seems to work, check it in. Come back after lunch, discover it's
> fubar, revert. Lather, rinse repeat" scares me.

you check in locally; nothing breaks until the changes are pulled into the
main code base

Matthieu Brucher

unread,
Jan 28, 2011, 11:13:24 AM1/28/11
to boost...@lists.boost.org
I would prefer a model where there is a highly developed suite of test code
(actually, I find it is often best if one begins with the tests first - but
sometimes that is not practical), unit tests, integration tests and
usability tests, and nothing gets checked in unless the code base plus the
new code not only compiles, but the developer can show that with his
implemented changes, the system still passes all tests.  And note, his new
code must come with a test suite of its own, and must pass through a code
review before we accept that his tests are adequate and thus before he can
run the full test suite.  With this model, it happens that new code stresses
existing code in initially unexpected ways, revealing previously undetected
bugs.  But at the same time, it makes it less likely that new code will
introduce a significant number of new bugs when it is approved to be
committed to the codebase.  And this means that while the new code that is
approved to be committed will have the same number of bugs per thousand lines
of code that most other programmers experience in their code, the number of
bugs per thousand lines of code can only decrease.  And where the version
control software in place does not support a given detail of this model (and
note, this model can be made to work even with something as primitive as RCS
or CVS), we need a manual process to make it work.  In my practice, no
member of my team commits anything until we know it works with everything
already in the repository - no exceptions.  This means that sometimes a
programmer will work on an assigned task for days, or even a week, without
committing changes to the repository.

I agree with all of this except the last sentence. This is exactly where a DVCS shines, BUT you can commit regularly. Why is this important? Small increments make it much easier to bisect for bugs. Of course you don't commit to the main repository each time, of course it has to be reviewed, of course it has to pass all the tests... but in addition you get the history, and I was saved many times by this feature (and of course it has to compile each time, because without this, you can't bisect ;)).
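[Editor's note: the bisection benefit mentioned above can be sketched like this. A hypothetical repository where a bug lands in the 4th of five small commits; `git bisect` binary-searches the history for it.]

```shell
# Small, always-working commits are what make bisecting possible.
mkdir bisect-demo && cd bisect-demo && git init -q
for i in 1 2 3 4 5; do
    echo "revision $i" > code.txt
    if [ "$i" -ge 4 ]; then echo "bug" >> code.txt; fi  # bug enters at commit 4
    git add code.txt
    git -c user.name=dev -c user.email=dev@example.com commit -qm "Commit $i"
done

# Mark the endpoints: HEAD is bad, the first commit is good.
git bisect start HEAD HEAD~4

# Let bisect drive an automated check; exit 0 = good, non-zero = bad.
git bisect run sh -c '! grep -q bug code.txt'

# The bisect log records which commit introduced the bug.
git bisect log | grep "first bad commit"
```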

Cheers,

Matthieu 

Robert Ramey

unread,
Jan 28, 2011, 11:45:48 AM1/28/11
to boost...@lists.boost.org
Dave Abrahams wrote:
> On Thu, Jan 27, 2011 at 1:26 PM, Robert Ramey <ra...@rrsd.com> wrote:
>> Beman Dawes wrote:
>
>>> Independent of modularization, ryppl, or anything else, is it time
>>> to start a discussion on the main list about moving to Git?
>>
>> To me, this illustrates a fundamental problem. If the issue of
>> modularization were addressed, there would be no requirement
>> that all libraries use the same version control system. That is,
>> change to a different version control system would occur
>> one library at time.
>>
>> Same can be said for the build system.
>
> In principle, true. In practice, we need some consistency across
> boost or it will become hard for the community to contribute, and
> especially hard for people (including release managers) to step in and
> fix things, or even assemble a distribution. This is to say nothing
> of automated testing.
>
>> The only coupling really required between libraries is
>>
>> a) namespace coordination
>> b) directory structure - to some extent at least at the top levels
>> c) quality standards
> d) testing

>
> introduces a build system dependency.

A particular library will depend upon at least one build/test
system. But that doesn't imply that all libraries have to depend
on the same one. In fact, we already have the situation that
many (all?) libraries can be built/tested with bjam or CTest.

The "boost test" would be just the union of the test procedures
implemented for each library. That is:

foreach(library L)
    lib/L/test/test.bat   // or lib/L/test/test.sh

The current approach implements the view of Boost as a
particular set of libraries ONLY built/tested/distributed as a
whole.

My view is that this is not scaling well and can never do so.
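[Editor's note: the per-library test union sketched above could be driven by a loop like the following. A hedged illustration only; the libs/<name>/test/test.sh layout and the stub test scripts are hypothetical stand-ins for real per-library suites.]

```shell
# Run each library's own test script and tally the results.
mkdir -p libs/serialization/test libs/thread/test
printf '#!/bin/sh\nexit 0\n' > libs/serialization/test/test.sh
printf '#!/bin/sh\nexit 0\n' > libs/thread/test/test.sh

passed=0; total=0
for t in libs/*/test/test.sh; do
    total=$((total + 1))
    if sh "$t"; then passed=$((passed + 1)); fi
done
echo "$passed of $total libraries passed"
```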

Each library should be "buildable, testable, and distributable"
on its own. The "official Boost distribution" would be just
the union of all the certified Boost libraries. Of course anyone
could make his own subdistribution if he wants to. Already
we have a step in this direction with bcp (distribute one library
and all its prerequisites).

I envision the future of boost that looks more like sourceforge
but with all libraries meeting the boost requirements. I see boost
as spending more time on reviews and less time on testing,
packaging, etc. I see "packaging/distribution" as being handled
by anyone who wants to create any subset of the boost libraries.
Finally, I see the testing as being done by each user to get
a wider coverage.

I see the centralized functions being limited to:
a) reviews/certification
b) accumulation of testing results
c) coordination/maintenance of standards (a-d above)
d) promotion of developer practices compatible with
the above (licenses, etc).

Suppose such an environment existed today. The whole
issue of moving to git wouldn't be an issue. Each library
author could use whichever system he preferred. Movement
to git could proceed on a library by library basis if/when
other developers were convinced it was an improvement.
It would be one less thing to spend time on.

>> ii) platform coverage
>> iii) documentation requirements
>>
>> If coupling is required somewhere else, it's an error that
>> is holding us back.
>
> I don't believe such an error exists here.

Robert Ramey

Dave Abrahams

unread,
Jan 28, 2011, 1:00:51 PM1/28/11
to boost...@lists.boost.org
At Fri, 28 Jan 2011 08:45:48 -0800,
Robert Ramey wrote:

Again, true in principle, but IMO not workable in practice, for the same
reasons I just cited.

> In fact, we already have the situation that
> many (all?) libraries can be built/tested with bjam or CTest.
>
> The "boost test" would be just the union of the test procedure
> implemented for each library. That is
>
> foreach(library L)
>     lib/L/test/test.bat   // or lib/L/test/test.sh
>
> The current approach implements the view of Boost as a
> particular set of libraries ONLY built/tested/distributed as a
> whole.
>
> My view is that this is not scaling well and can never do so.

+1

Still, that doesn't mean we're going to be more nimble and scalable if
there's no standardization of tools across Boost. Quite the contrary,
IMO. I can imagine all kinds of problems coming up that are simply
ruled out by using the same tools.

> Each library should be "buildable, testable, and distributable"
> on its own.

Except that there are interdependencies among some of the libraries.
How many build tools should you need in order to install
Boost.Serialization?

> I see the centralized functions being limited to:
> a) reviews/certification
> b) accumulation of testing results
> c) coordination/maintenance of standards (a-d above)
> d) promotion of developer practices compatible with
> the above (licenses, etc).
>
> Suppose such an environment existed today. The whole
> issue of moving to git wouldn't be an issue. Each library
> author could use whichever system he preferred. Movement
> to git could proceed on a library by library basis if/when
> other developers were convinced it was an improvement.
> It would be one less thing to spend time on.

Or, it could be one more thing to spend time on.

Standardization != coordination, and while coordination can slow
things down, standardization brings efficiencies.

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com


Ted Byers

unread,
Jan 28, 2011, 1:12:06 PM1/28/11
to boost...@lists.boost.org
From: boost-use...@lists.boost.org
[mailto:boost-use...@lists.boost.org] On Behalf Of Eric J. Holtman
>Sent: January-28-11 11:07 AM

>To: boost...@lists.boost.org
>Subject: Re: [Boost-users] What's happened to Ryppl?
>
>On 1/28/2011 9:59 AM, Ted Byers wrote:
>
>>
>> This may be adequate IF you're working alone and if you have all the
>> time in the world, but it will become an unmaintainable nightmare when
>> the number of programmers contributing to the project increases
>> significantly and as time pressures grow. Imagine the chaos that
>> would result from this if you had a dozen programmers doing this
>> independently on the same codebase.
>
>You proceed from a false assumption. You might have
>1000 commits in your local copy. When you're done, and ready to publish,
>you "push" it as one.
>
>No different than SVN.
>
So, no pressing need to change from SVN.

>> commercial app that has half a million lines of code, or more, it is
>> just too easy to break things (there are practical and commercial
>> reasons for rational modularization as results in object oriented
>> programming, as well as proper management/design of compilation units,
&c.).
>>
>
>Again, no one sees your changes until you're ready. It's there to help the
>individual. How often have you been in the middle of a large change to
>quite a few files, then said "oh, this sucks", and reverted it? Then, a half
>hour later, you decide "Oh, wait, 100 of those lines (out of the 1000 you
>just threw away) might be useful".
>
>With SVN, you're hosed. With git, you're not.

Really?  Anyone with any experience has faced this sort of thing countless
times before, and even in the absence of software that makes handling this
easy, has developed methods (often manual) to deal with it.  Only a
complete novice will not have figured this out and thus be so dependent on
his software that he'd be "hosed" if he uses the 'wrong' software.  But
then, in a commercial setting, part of the role of the senior
programmers/team leads/&c. is to teach their juniors so that they are
flexible enough to cope with these situations regardless of the supporting
software used.

What I encourage among my team is to never throw anything away, unless it is
demonstrably wrong.  So, even with something as primitive as RCS, I/we would
not be "hosed". If git provides help with this, fine, but it is not
essential. My practice has been to tell my team members they can use
whatever tools they wish on their own machine. Some like Emacs, others vi,
&c. Some like commenting out code that appears problematic while others
include version info in temporary backups. I encourage them to work in
whatever way they find conducive to being as productive as practicable, as
long as the code they produce works.

The actual decision making process used can be rather complex, and is not
simply a matter of comparing product feature lists in most cases. There is
first an analysis of what development model is required. Then, we need an
examination of features of available products and the extent to which each
supports the development model selected. If the product to be developed is
new, then one selects that product that best meets perceived needs (of
course, this involves putting each of the products to the test to verify
that each does what it says it does - it would be irresponsible to fail to
put each option through its paces). If the project involves extending or
refactoring an existing product, there is then an examination of the current
state of that product and the software used to support it, along with
everything that would be necessary to migrate it to use one or more new
tools. And then there is the question of the implications for anyone else
who works with the code. I would be quite annoyed if someone on some other
team made a decision that forced me and my team to change some of the
software or development processes we use because they adopted software that
is incompatible with what we had been using.

I have seen situations where one member of a team hyped one product
(generally a commercial library), spending months developing new code using
it, despite being repeatedly told of one or more deficiencies in it. His
argument was that the product is the best on the market (with some support
from the trade literature), and that he just needed a bit more time to
figure out how to address the apparent deficiencies.  After months had been
spent (some would say wasted), he told us that there is no way to work
around the deficiencies in the product and that we had to just live with
them.  I found a simpler product that DID meet all our needs, but it cost a
few months to refactor the code to use it.  As limited as the simpler
product was, it met all our needs, unlike the alleged 'best' product, even
though other aspects of the 'best' product worked fine and it had a huge
number of features for which we had no need.  The point, with an
existing product, is that it represents a whole suite of decisions that had
been made about design, and the various tools to be used to support it, and
any decision to replace one tool (or anything else) carries costs that must
be carefully evaluated and compared with alleged benefits.

To illustrate, comparing software to manage task lists and meeting
schedules, one has a wide range of products to examine from Google Calendar
through open source products like SugarCRM, to the various commercial CRM
products. At the one extreme, Google calendar is like a bicycle, with
SugarCRM being like a 10 year old Chevy, and the commercial products more
like a late model Mercedes. For some, Google Calendar is sufficient, and
works well. For others (perhaps most) something like SugarCRM would be
appropriate, and for others, with greater needs and deep pockets, one of the
commercial offerings may be preferred. But, if you already have extensive
data in SugarCRM, and you learn of a commercial offering that better meets
your needs, migrating all your data from the one to the other will not be
trivial, and may in fact make switching counterproductive, so that hiring a
PHP programmer to extend SugarCRM instead becomes the more rational option.

Please understand, I am not arguing the merits of git with you. Rather, I
am pointing out you haven't made the case that a change to use it instead of
SVN is either warranted or feasible or practicable. I have a number of
products in svn, and if I am to be convinced to use any other version
control software to manage their code instead of SVN, I'd need to be
presented with an analysis not only of what each option
(RCS,CVS,SVN,git,MKS, &c.) offers, but proof that each works as described,
that the benefits of making a switch outweigh the costs, and a viable plan
for making the switch. And all of this would have to be supported from well
documented experience. An argument comprised simply of a claim that product
'X' is the best available because it does 'Y' does not even come close.

Cheers

Ted

Eric J. Holtman

unread,
Jan 28, 2011, 1:16:01 PM1/28/11
to boost...@lists.boost.org
On 1/28/2011 12:12 PM, Ted Byers wrote:

>
> Please understand, I am not arguing the merits of git with you. Rather, I
> am pointing out you haven't made the case that a change to use it instead of
> SVN is either warranted or feasible or practicable.

I never made that case. I use SVN. I was just
pointing out that one of your objections was a
straw man.

John Maddock

unread,
Jan 28, 2011, 1:46:17 PM1/28/11
to boost...@lists.boost.org
>> The current approach implements the view of Boost as a
>> particular set of libraries ONLY built/tested/distributed as a
>> whole.
>>
>> My view is that this is not scaling well and can never do so.
>
> +1
>
> Still, that doesn't mean we're going to be more nimble and scalable if
> there's no standardization of tools across Boost. Quite the contrary,
> IMO. I can imagine all kinds of problems coming up that are simply
> ruled out by using the same tools.

+1 from me, we must IMO have standardized tools - whatever we decide those
are - otherwise what you're proposing is the complete fragmentation of Boost
into something even more unmanageable than now.

I still haven't heard from the git proponents: what's wrong with using
git-svn to manage a local - i.e. distributed - git repository, and then
periodically pushing changes to SVN. In other words working with git just
as you normally would, except for having to type "git svn" from time to
time? This isn't a rhetorical question BTW, I've never used either git or
git-svn, so I clearly don't know what I'm missing ;-)
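[Editor's note: for reference, the git-svn round trip John asks about looks roughly like this. Not a recommendation, just the mechanics; the URL and project name are illustrative, not real.]

```shell
# One-time import of an SVN repository into a local git clone
git svn clone https://svn.example.org/project/trunk project

# Work locally with ordinary git: branch, commit, rebase...
cd project
git checkout -b my-feature
# ...edit, git add, git commit as usual...

# Pull new SVN revisions into the local history
git svn rebase

# Replay the local commits back onto the SVN server
git svn dcommit
```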

John.

PS, just looked at the git website, and it appears that us Windows users are
restricted to either Cygwin or MSys builds?  If so, that appears to be a
major drawback IMO....  OK, I see there's a TortoiseGit, but it looks
distinctly immature at first glance, and still depends on MSys (i.e. no easy
integrated install)?

Dean Michael Berris

unread,
Jan 28, 2011, 1:49:00 PM1/28/11
to boost...@lists.boost.org
On Fri, Jan 28, 2011 at 11:59 PM, Ted Byers <r.ted...@gmail.com> wrote:
>>From: boost-use...@lists.boost.org
> [mailto:boost-use...@lists.boost.org] On Behalf Of Eric J. Holtman
>>Sent: January-28-11 8:56 AM
>>To: boost...@lists.boost.org
>>Subject: Re: [Boost-users] What's happened to Ryppl?
>>
>>
>>You can check in while you work.   Which means you
>>don't have to worry about "breaking the build", or anything like that.
>>
>>Write some code, test it, seems to work, check it in.  Come back after
>>lunch, discover it's fubar, revert.  Lather, rinse, repeat.
>
> This may be adequate IF you're working alone and if you have all the time in
> the world, but it will become an unmaintainable nightmare when the number of
> programmers contributing to the project increases significantly and as time
> pressures grow.  Imagine the chaos that would result from this if you had a
> dozen programmers doing this independently on the same codebase.
>

One project to look at: Linux.

Time pressure? Couple of weeks to merge upstream.

Programmers contributing to the project? Thousands.

Programmers doing this independently on the same codebase? Absolutely.

'nuff said.

[snip tl;dr]

--
Dean Michael Berris
about.me/deanberris

Dean Michael Berris

unread,
Jan 28, 2011, 2:01:33 PM1/28/11
to boost...@lists.boost.org
On Sat, Jan 29, 2011 at 2:46 AM, John Maddock <boost...@virgin.net> wrote:
>
> I still haven't heard from the git proponents, what's wrong with using
> git-svn to manage a local - i.e. distributed - git repository, and then
> periodically pushing changes to SVN.  In other words working with git just
> as you normally would, except for having to type "git svn" from time to
> time?  This isn't a rhetorical question BTW, I've never used either git or
> git-svn, so I clearly don't know what I'm missing ;-)
>

This doesn't change the Boost central repo which is actually one of
the reasons why the current process doesn't scale well.

The idea really (partially hashed out here, still a work in progress:
https://svn.boost.org/trac/boost/wiki/DistributedDevelopmentProcess)
at least when I first brought it up is that we should be able to get
multiple distributions, allow the independent but coordinated
development of individual libraries, allow contributors to get into
the game easier, and rely on an organic web of trust to allow for
self-organization of sub-communities and a larger Boost community.

Git is part of that idea mostly because the barrier to entry for
potential contributors is zero. Anybody can clone the git
repository, get development going locally, add their contributions,
and submit pull requests easily. The pull requests can go to
maintainers, co-maintainers, the mailing list at large, or someone
who's already a contributor to shepherd changes in.

This allows all the work to happen in a distributed manner, with
release management largely a matter of packaging publicly published
versions of libraries that are tested to work well together in a
single distribution. I just cannot imagine how this would be done with
anything other than git that integrates the web of trust, organic
fan-out growth of the self-organizing community, and rich set of tools
and practices supporting it. Of course it's not just the git thing,
it's also a workflow thing, and the distributed workflow along with
the distributed version control system go hand-in-hand.

> John.
>
> PS, just looked at the git website, and it appears that us Windows users are
> restricted to either Cygwin or MSys builds?  If so that appears to be a
> major drawback IMO.... OK I see there's a TortoiseGit, but it looks
> distinctly immature at first glance, and still depends on MSys (i.e. no easy
> integrated install)?

MSysGit is the best Git I've seen on Windows so far. I used it
extensively on the command-line and had 0 problems working with it
using the tutorials for Git on Linux.

YMMV though.

HTH

--
Dean Michael Berris
about.me/deanberris

Dave Abrahams

unread,
Jan 28, 2011, 3:03:07 PM1/28/11
to boost...@lists.boost.org
At Fri, 28 Jan 2011 13:12:06 -0500,

Ted Byers wrote:
>
> Please understand, I am not arguing the merits of git with you. Rather, I
> am pointing out you haven't made the case that a change to use it instead of
> SVN is either warranted or feasible or practicable. I have a number of
> products in svn, and if I am to be convinced to use any other version
> control software to manage their code instead of SVN, I'd need to be
> presented with an analysis not only of what each option
> (RCS,CVS,SVN,git,MKS, &c.) offers, but proof that each works as described,
> that the benefits of making a switch outweigh the costs, and a viable plan
> for making the switch. And all of this would have to be supported from well
> documented experience. An argument comprised simply of a claim that product
> 'X' is the best available because it does 'Y' does not even come close.

This is all way off-topic for the boost-users list. I'm not a
moderator here, but if I were, I'd be asking you to take it elsewhere.

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com


Edward Diener

unread,
Jan 28, 2011, 3:10:56 PM1/28/11
to boost...@lists.boost.org
On 1/28/2011 1:46 PM, John Maddock wrote:
>>> The current approach implements the view of Boost as a
>>> particular set of libraries ONLY built/tested/distributed as a
>>> whole.
>>>
>>> My view is that this is not scaling well and can never do so.
>>
>> +1
>>
>> Still, that doesn't mean we're going to be more nimble and scalable if
>> there's no standardization of tools across Boost. Quite the contrary,
>> IMO. I can imagine all kinds of problems coming up that are simply
>> ruled out by using the same tools.
>
> +1 from me, we must IMO have standardized tools - whatever we decide
> those are - otherwise what you're proposing is the complete
> fragmentation of Boost into something even more unmanageable than now.
>
> I still haven't heard from the git proponents, what's wrong with using
> git-svn to manage a local - i.e. distributed - git repository, and then
> periodically pushing changes to SVN. In other words working with git
> just as you normally would, except for having to type "git svn" from
> time to time? This isn't a rhetorical question BTW, I've never used
> either git or git-svn, so I clearly don't know what I'm missing ;-)

The arguments of Git's superiority as a distributed VCS over SVN's
centralized VCS do not convince me either. I understand them but I
wonder if the switch from SVN to Git is worth it just so end-users can
make their own changes to a local Git repository and then push their
entire repository to a centralized location some time later. This is as
opposed to SVN users periodically committing changes to a centralized
SVN repository. I just do not see the big deal in the difference.

I do not see Boost's possible need to become less centralized and go
from a monolithic distribution to possible individual distributions as
dependent on using a distributed repository model versus a centralized
repository model. I believe many other issues are much more important,
as brought up by Robert Ramey and others.

I would much rather Boost have a discussion of those other issues than
focus on Git versus SVN, which I think of as just another red herring.

>
> John.
>
> PS, just looked at the git website, and it appears that us Windows users
> are restricted to either Cygwin or MSys builds? If so that appears to be
> a major drawback IMO.... OK I see there's a TortoiseGit, but it looks
> distinctly immature at first glance, and still depends on MSys (i.e. no
> easy integrated install)?

I have not looked at what it takes to build it from source, but
installing Tortoise Git on Windows is pretty easy from a binary
download. The documentation is not as good as Tortoise SVN's and leaves
much to be desired. There is a mailing list/Gmane NG for questions etc.

Dave Abrahams

unread,
Jan 28, 2011, 3:24:42 PM1/28/11
to boost...@lists.boost.org
At Fri, 28 Jan 2011 15:10:56 -0500,

Edward Diener wrote:
>
> I would much rather Boost have a discussion of those other issues than
> focus on Git versus SVN, which I think of as just another red herring.

Hi Edward,

Sounds like a good topic. Why don't you start that discussion over on
the Boost developers' list?

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com


Dean Michael Berris

unread,
Jan 28, 2011, 3:33:02 PM1/28/11
to boost...@lists.boost.org
On Sat, Jan 29, 2011 at 4:10 AM, Edward Diener <eldi...@tropicsoft.com> wrote:
> On 1/28/2011 1:46 PM, John Maddock wrote:
>>
>> I still haven't heard from the git proponents, what's wrong with using
>> git-svn to manage a local - i.e. distributed - git repository, and then
>> periodically pushing changes to SVN. In other words working with git
>> just as you normally would, except for having to type "git svn" from
>> time to time? This isn't a rhetorical question BTW, I've never used
>> either git or git-svn, so I clearly don't know what I'm missing ;-)
>
> The arguments of Git's superiority as a distributed VCS over SVN's
> centralized VCS do not convince me either. I understand them but I wonder if
> the switch from SVN to Git is worth it just so end-users can make their own
> changes to a local Git repository and then push their entire repository to a
> centralized location some time later. This is as opposed to SVN users making
> periodic changes by committing to a centralized SVN repository periodically.
> I just do not see the big deal in the difference.
>

I think you're looking at it as a purely tool vs tool comparison which
doesn't amount to much. Consider instead the workflow a distributed
version control system enables and you might see the difference
more clearly.

Consider a library being worked on by N different people concurrently.
Each one can work on exactly the same code locally, making their
changes locally. Then say someone pushes their changes to the
"canonical" repository. Each person can then pull these changes
locally, stabilizing their own local repository, and fixing things
until it's stable. You can keep doing this every time without any one
of these N people waiting on anybody to "finish". Now then imagine
that there's only one person who has push capabilities/rights to that
"canonical" repository and that person's called a maintainer.

All the N-1 people then ask this maintainer to pull changes in or
merge patches submitted by them. If the maintainer is willing and
capable, that's fine and dandy: changes get merged. Now consider when
the maintainer is unwilling or incapable: what happens to the changes
these N-1 people make? Simple, they publish their repository somewhere
accessible and all the N-2 people can congregate around that
repository instead. MIA maintainer out of the way, release managers
can choose to pull from someone else's published version of the
library. Easy as pie.

Explain to me now then how you will enable this kind of workflow with
a centralized SCM.
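
Dean's scenario can be sketched end-to-end with stock git commands. Everything below is a local simulation: the bare repositories stand in for hosted services, and all names and paths are made up for illustration:

```shell
set -e
top=$(mktemp -d); cd "$top"

# Bare repos standing in for hosted repositories (hypothetical names).
git init -q --bare canonical.git
git init -q --bare published.git

# The maintainer seeds the "canonical" repository.
git clone -q canonical.git maintainer
cd maintainer
echo 'original code' > lib.txt
git add lib.txt
git -c user.name=m -c user.email=m@example.org commit -qm 'initial import'
git push -q origin HEAD
cd "$top"

# A contributor clones, commits locally, then publishes her own repo
# without needing any permission from the maintainer.
git clone -q canonical.git contributor
cd contributor
echo 'bugfix' >> lib.txt
git -c user.name=c -c user.email=c@example.org commit -qam 'fix a bug'
git push -q "$top/published.git" HEAD
cd "$top"

# With the maintainer MIA, everyone else pulls from the published fork.
git clone -q published.git other
grep bugfix other/lib.txt
```

The point is that `published.git` is a first-class repository: release managers can clone or pull from it exactly as they would from the canonical one.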

> I do not see Boost's possible need to become less centralized and go from a
> monolithic distribution to possible individual distributions as dependent on
> using a distributed repository model versus a centralized repository model.
> I believe many other issues are much more important, as brought up by Robert
> Ramey and others.
>

How about that try above?

> I would much rather Boost have a discussion of those other issues than focus
> on Git versus SVN, which I think of as just another red herring.
>

How about the workflow, is that something you'd like to see discussed as well?

--
Dean Michael Berris
about.me/deanberris

Robert Ramey

unread,
Jan 28, 2011, 4:35:19 PM1/28/11
to boost...@lists.boost.org
Dave Abrahams wrote:

> Except that there are interdependencies among some of the libraries.
> How many build tools should you need in order to install
> Boost.Serialization?

Currently boost serialization needs only one tool to build/test - Bjam.
The rest of the dependencies are header-only. I don't know, but
boost serialization might be buildable/testable with CTest. I
don't think that's a huge hill - certainly not greater than the
current situation.

(Aside: starting in 1.46, testing the serialization library now
also depends on the filesystem library, which is also compiled.)

>> I see the centralized functions being limited to:
>> a) reviews/certification
>> b) accumulation of testing results
>> c) coordination/maintainence of standards (a-d above)
>> d) promotion of developer practices compatible with
>> the above (licenses, etc).
>>
>> Suppose such an environment existed today. The whole
>> issue of moving to git wouldn't be an issue. Each library
>> author could use which ever system he preferred. Movement
>> to git could proceed on a library by library basis if/when
>> other developers were convinced it was an improvement.
>> It would be one less thing to spend time on.
>
> Or, it could be one more thing to spend time on.
>
> Standardization != coordination, and while coordination can slow
> things down, standardization brings efficiencies.

It is unbelievably painful for me to say this, but I think we're in
agreement here.

I'm proposing that we "standardize" things like namespaces,
directory structure, testability requirements, documentation requirements,
platform support requirements, etc. BUT that we try to move away
from having to coordinate so closely - e.g. making a giant release on
a specific date.

Robert Ramey

Mostafa

unread,
Jan 28, 2011, 7:34:12 PM1/28/11
to boost...@lists.boost.org
On Fri, 28 Jan 2011 13:35:19 -0800, Robert Ramey <ra...@rrsd.com> wrote:

> I'm proposing that we "standardize" things like namespaces,
> directory structure, testability requirements, documentation requirements,
> platform support requirements, etc.

+1 As a mere user of boost, I believe such client-facing standardization
will go a long way in easing the adoption and learning of boost libraries.

Mostafa

Scott McMurray

unread,
Jan 28, 2011, 8:02:02 PM1/28/11
to boost...@lists.boost.org
On Fri, Jan 28, 2011 at 07:59, Ted Byers <r.ted...@gmail.com> wrote:
>
> I would prefer a model where there is a highly developed suite of test code
> (actually, I find it is often best if one begins with the tests first - but
> sometimes that is not practical), unit tests, integration tests and
> usability tests, and nothing gets checked in unless the code base plus the
> new code not only compiles, but the developer can show that with his
> implemented changes, the system still passes all tests.  [...] In my practice, no

> member of my team commits anything until we know it works with everything
> already in the repository - no exceptions.  This means that sometimes a
> programmer will work on an assigned task for days, or even a week, without
> commiting changes to the repository.
>

In such an environment, a DCVS is *more* useful, not less. Your
process overloads commits with an extra meaning of validation. With a
DCVS, team members commit as often as is helpful for their personal
work, without affecting others. Then the validation can be run in
parallel, and only pushed to the anointed repo once the validation
passes. It avoids the downtime imposed by requiring code review or
CITs that occurs when you can't check things in because they haven't
passed the process, but nor can you keep working on things since
there's no nice way to commit something other than your working copy.
(In git, for example, you've checked in the changes, and all you need
to send is your box's name and the hash of the commit to others -- be
it the code review alias, the automated test suite runner, or the
person responsible for the official build.)

I've spent time trying to work with another dev on a feature at a
company with a strict "code review before checkin" policy, and the
lack of source control -- since you can't check anything in anywhere
-- makes it a terrible experience.

Ravi

unread,
Jan 29, 2011, 12:43:07 AM1/29/11
to boost...@lists.boost.org
[This reply does not advocate moving boost to git.]

On Friday 28 January 2011 10:46:17 John Maddock wrote:
> I still haven't heard from the git proponents, what's wrong with using
> git-svn to manage a local - i.e. distributed - git repository, and then
> periodically pushing changes to SVN. In other words working with git just
> as you normally would, except for having to type "git svn" from time to
> time? This isn't a rhetorical question BTW, I've never used either git or
> git-svn, so I clearly don't know what I'm missing ;-)

The strength of git (and other DVCSs) comes from the ability to create
essentially unlimited local branches and share/merge said branches with local
branches of other parties. git-svn does not work with this model if you want
the ability to push changes into SVN.

The fundamental problem with git-svn
------------------------------------
One cannot clone a git repository (created with git-svn) and then use the
resulting repository to contribute to the original SVN repository without the
use of hacks.

Two points from the git-svn manual:

1. git clone does not clone branches under the refs/remotes/ hierarchy or any
git svn metadata, or config. So repositories created and managed using
git svn should use rsync for cloning, if cloning is to be done at all.

2. For the sake of simplicity and interoperating with a less-capable system
(SVN), it is recommended that all git svn users clone, fetch and dcommit
directly from the SVN server, and avoid all git clone/pull/merge/push
operations between git repositories and branches.

The two points above essentially make git-svn worthless for use as a
*distributed* VCS since cloned repositories are neither equally capable nor
can they be used to follow the develop/push/pull methodology of DVCSs.

The strength of DVCSs is directly related to (a somewhat convoluted, but very
useful definition of) zero-barrier to entry: any changes you make in your
clone are equal in terms of publishability with those in anyone else's clone;
git-svn cannot handle this model because SVN cannot handle it.

Hope this helps.

Regards,
Ravi

Anthony Foiani

unread,
Jan 29, 2011, 2:37:52 AM1/29/11
to boost...@lists.boost.org

Edward, all, greetings --

Edward Diener <eldi...@tropicsoft.com> writes:

> What do you mean by "incremental checkins"? If I use SVN I can make
> as many changes locally as I want.

Hopefully others have clarified this point for us, but to be perfectly
clear:

What I mean by "incremental checkins" is that I can make changes on my
local machine *and save each change in my local history*. When I push
these out to the world at large, I can either keep the history the way
it is, or condense multiple "bite-sized" check-ins to a single
"plate-sized" feature addition.

E.g., if I make changes A, B, C, and D locally, I would commit
*locally* between each change. If it turns out that commit B was
incorrect or sub-optimal, I can use my local repo to do the equivalent
of:

revert D
revert C
revert B
apply C
apply D
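
In git that revert/apply dance is a single history rewrite. A self-contained sketch, using the non-interactive `git rebase --onto` in place of the more usual interactive `git rebase -i` (all commit and file names are made up):

```shell
set -e
cd "$(mktemp -d)"
git init -q repo && cd repo

# Make four bite-sized local commits: A, B, C, D.
for c in A B C D; do
  echo "$c" > "$c.txt"
  git add "$c.txt"
  git -c user.name=t -c user.email=t@example.org commit -qm "$c"
done

# Drop commit B by replaying C and D directly onto A.
# (HEAD~3 is A, HEAD~2 is B; everything after B is rebased onto A.)
git -c user.name=t -c user.email=t@example.org rebase -q --onto HEAD~3 HEAD~2

git log --format=%s   # newest first: D, C, A -- B is gone
ls                    # A.txt C.txt D.txt -- B's change never happened
```

The equivalent under SVN would require manually reverting and re-applying patches, since there are no local commits to operate on.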

As others have pointed out, there is a huge difference between "local
change" and "tracking local commits". git and hg both provide the
latter feature.

> "More distributed" [w.r.t. data backup] means nothing to me. Someone
> really needs to justify this distributed development idea with
> something more than "its distributed so it must be good".

[This is not meant in any way to disparage the current boost
infrastructure maintainers; when I ask a question, please read it as
"how would I fill out a form" and not "I assume they haven't taken
care of this".]

What's the backup plan for the boost SVN repository? Who maintains
it? What is the replication factor? Offsite backups? How often is
restore capability verified? Are all backups checksummed?

With distributed repositories, every developer has a complete copy of
the entire (public) history of the project as well as any local
changes they have made.

Verification of backup/restore capability is given by the fact that
it's done via the exact same operations that are required in everyday
development.

In both git and hg, all content is implicitly checksummed (by virtue
of content being addressed primarily by SHA1, at least in git).

(This isn't quite as ballsy as Linus pointing out that he gets away
with just uploading tarballs, with his backup being taken care of by
the many thousands that download his releases...)

> Please explain "cheaper and lighter weight" ?

Please note, this might be from my inexperience, but I've found that
the only effective way to work on a "private branch" in SVN is to
check out the branch I care to modify in a separate directory.

As an example of SVN making things painful, I'm working on a minor fix
inside GCC. I had to check out the trunk (1.8GiB on disk, no idea how
much over the network). Then I checked out the current release tag,
another 1.8GiB of network traffic.

Compare with a project that is distributed via mercurial. In this
case, I have the trunk and two private branches, each for a different
feature. The original checkout of trunk was about as expensive as for
SVN; after that, though, I could do a local "clone" to get my private
feature-development branches.
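
The cheap-local-clone pattern works the same way in git. A minimal local sketch (no network involved; all paths and file names are hypothetical):

```shell
set -e
cd "$(mktemp -d)"

# A local "trunk" checkout with one commit in it.
git init -q trunk
cd trunk
echo 'v1' > main.txt
git add main.txt
git -c user.name=t -c user.email=t@example.org commit -qm 'trunk import'
cd ..

# A local clone is cheap: no network traffic, and git hardlinks the
# object store where it can, so disk cost is small too.
git clone -q trunk feature-x
cd feature-x
echo 'experimental change' >> main.txt
git -c user.name=t -c user.email=t@example.org commit -qam 'feature work'

# The trunk checkout is completely untouched by the experiment.
grep v1 ../trunk/main.txt
```

Each extra feature branch is another local clone (or, more idiomatically, just a branch inside one clone), rather than another full checkout over the network.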

> It is all this rhetoric that really bothers me from developers on
> the Git bandwagon. I would love to see real technical proof.

My apologies if my original post came across as propaganda; I was just
trying to communicate what I found to be the distinct "in the real
world" advantages of using a DVCS (git or hg) over a centralized VCS
(SVN).

(Granted, I suppose it's possible to have a *local* SVN server that
one could use to do much of the same work as indicated above. I have
no idea how painful that might be, though, and since the current
leading DVCSs already solve the problem for me, I'm disinclined to try
to find out.)

And while my sympathies are primarily with git, I have (and do) work
with projects that use hg, and I find them both vastly more pleasant
than svn. I even contributed a trivial doc patch to hg; I found the
learning curve, tool usage, and community response all incredibly
pleasant.

Best regards,
Tony

Vladimir Prus

unread,
Jan 29, 2011, 2:45:10 AM1/29/11
to boost...@lists.boost.org
Dean Michael Berris wrote:

> Consider a library being worked on by N different people concurrently.
> Each one can work on exactly the same code locally, making their
> changes locally. Then say someone pushes their changes to the
> "canonical" repository. Each person can then pull these changes
> locally, stabilizing their own local repository, and fixing things
> until it's stable. You can keep doing this every time without any one
> of these N people waiting on anybody to "finish". Now then imagine
> that there's only one person who has push capabilities/rights to that
> "canonical" repository and that person's called a maintainer.
>
> All the N-1 people then ask this maintainer to pull changes in or
> merge patches submitted by them. If the maintainer is willing and
> capable, that's fine and dandy changes get merged. Now consider when
> maintainer is unwilling or incapable, what happens to the changes
> these N-1 people make? Simple, they publish their repository somewhere
> accessible and all the N-2 people can congregate around that
> repository instead. MIA maintainer out of the way, release managers
> can choose to pull from someone else's published version of the
> library. Easy as pie.
>
> Explain to me now then how you will enable this kind of workflow with
> a centralized SCM.

Private branches have existed in all SCMs since, like, forever. As
repeatedly mentioned before, everything you are talking about above is a
process matter, not a tool matter.

- Volodya

--
Vladimir Prus
Mentor Graphics
+7 (812) 677-68-40

Anthony Foiani

unread,
Jan 29, 2011, 3:07:31 AM1/29/11
to boost...@lists.boost.org

Ted, all: greetings --

"Ted Byers" <r.ted...@gmail.com> writes:
> Really? Anyone with any experience has faced this sort of thing
> countless times before, and even in the absence of software that
> makes handling this easy, have developed methods (often manual) to
> deal with it.

This is a key observation.

Having said that, would you rather have generation after generation of
"complete novice" programmers climb this wall -- each in their
own way, with their own tools, using their own toolset, and their own
assumptions -- or would you prefer to switch to a tool that solves the
problem for you (and them)?

You're completely correct that it's quite trivial to generate local,
offline snapshots of an SVN tree. Here's mine:

$ cat snapshot-dir.sh
#!/bin/bash
#
# snapshot-dir.sh
#
# This is mostly useful when we can't get to a version control server,
# but still want to capture a particular edit state. Switching to git
# (which is local by design) is probably really the right answer, but
# for now...

ts=$(date +'%Y%m%d%H%M%S')

for d in "$@"
do
    echo "=== $d ==="
    tar --create \
        --gzip \
        --verbose \
        --file "saved-snaps/$d-$ts.tar.gz" \
        --exclude ".svn" \
        "$d"
done

But you know what? Integrating those local snapshots, once I was back
on the network, was quite a bit more effort than I'd like. The
difference between a home-brew snapshot script (and manual merging),
and a tool that is designed to support this style of work, is huge.

Would you rather your programmers spend time writing tools like this,
or would you prefer they utilize toolkits that give them the features
they need up-front?

> But then, in a commercial setting, part of the role of the senior
> programmers/team leads/&c. is to teach their juniors so that they
> are flexible enough to cope with these situations regardless of the
> supporting software used.

Another role of the "senior programmers", however, is to step back and
review their *process* occasionally.

That's a big part of what this thread is about; the contributors to
boost are by no means "novice" programmers.

> What I encourage among my team is to never throw anything away,
> unless it is demonstrably wrong. So, even with something as
> primitive as RCS, I/we would not be "hosed". If git provides help
> with this, fine, but it is not essential.

How do they "never throw anything away"? How are those changes
tracked, and do they have contemporaneous comments that would make
suitable commit logs?

Put another way, how many possibly-useful changes are sitting in your
developers' working directories, under names like "third-try" and
"oops"?

> Please understand, I am not arguing the merits of git with you.
> Rather, I am pointing out you haven't made the case that a change to
> use it instead of SVN is either warranted or feasible or
> practicable.

Totally agreed.

In this case, I would venture that there are some non-arguable facts:

1. Boost development is decentralized.

This seems obvious on the surface. Multiple companies,
programmers, countries, and even timezones.

2. Boost release coordination is centralized.

This is a good thing! There is one focus for naming, QA, praise,
and complaints.

3. Any DVCS can trivially emulate any given centralized VCS.

4. The DVCSs under discussion (hg and git) have both been proven
workable (and, in general, superior to centralized VCSs and simple
email patch-passing) by some very large, very popular, and
many-contributor projects.

> I have a number of products in svn, and if I am to be convinced to
> use any other version control software to manage their code instead
> of SVN, I'd need to be presented with an analysis not only of what
> each option (RCS,CVS,SVN,git,MKS, &c.) offers, but proof that each
> works as described, that the benefits of making a switch outweigh
> the costs, and a viable plan for making the switch.

This is a laudable goal, but proper "proof" consists of running both
scenarios in parallel (providing a control group). Boost being a
volunteer effort, I posit that it is impossible to "prove" that a
particular change in methodology yields a specified cost/benefit
ratio.

Lacking rigorous proof, we can instead look to the experiences of
similar projects. Of those projects that are large and decentralized,
it seems that the choice to use a DVCS has been largely accepted.

I would suggest following their lead.

Best regards,
Tony

Dave Abrahams

unread,
Jan 29, 2011, 4:31:58 AM1/29/11
to boost...@lists.boost.org
On Fri, Jan 28, 2011 at 4:35 PM, Robert Ramey <ra...@rrsd.com> wrote:
>> Standardization != coordination, and while coordination can slow
>> things down, standardization brings efficiencies.
>
> It is unbelievably painful for me to say this, but I think we're in
> agreement here.

Yeah, it's actually giving me a headache and a nasty case of bursitis.

> I'm proposing that we "standardize" things like namespaces,
> directory structure, testability requirements, documentation requirements,
> platform support requirements, etc.

and, I'm suggesting, tools such as VCSes, build/test infrastructure, etc.

> BUT that we try to move away
> from having to coordinate so closely - e.g. making a giant release on
> a specific date.

i-think-i-may-have-slipped-a-disc-too-ly y'rs

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com

John Maddock

unread,
Jan 29, 2011, 5:08:47 AM1/29/11
to boost...@lists.boost.org
> I think you're looking at it as a purely tool vs tool comparison which
> doesn't amount to much. Consider instead the workflow a distributed
> version control system enables and you might see the difference
> more clearly.
>
> Consider a library being worked on by N different people concurrently.
> Each one can work on exactly the same code locally, making their
> changes locally. Then say someone pushes their changes to the
> "canonical" repository. Each person can then pull these changes
> locally, stabilizing their own local repository, and fixing things
> until it's stable. You can keep doing this every time without any one
> of these N people waiting on anybody to "finish".

That's exactly what we do now with SVN.

> Now then imagine
> that there's only one person who has push capabilities/rights to that
> "canonical" repository and that person's called a maintainer.
>
> All the N-1 people then ask this maintainer to pull changes in or
> merge patches submitted by them. If the maintainer is willing and
> capable, that's fine and dandy changes get merged. Now consider when
> maintainer is unwilling or incapable, what happens to the changes
> these N-1 people make? Simple, they publish their repository somewhere
> accessible and all the N-2 people can congregate around that
> repository instead. MIA maintainer out of the way, release managers
> can choose to pull from someone else's published version of the
> library. Easy as pie.

OK, if forking is a good thing then I can see how that helps.

Question: what's to stop you, right now, from building a better and greater
version of library X in the sandbox, and then asking the Boost community to
consider that the new Trunk and you the new maintainer? Different tool,
same process.

I still think there are pros and cons though:

* As I see it Git encourages developers to keep their changes local for
longer and then merge when stable. That's cool, and I can see some
advantages especially for developers wanting to get involved, but I predict
more work for maintainers of the canonical repo trying to figure out how to
resolve all those conflicts. Obviously with SVN we still get conflicts -
for example Paul and I often step on each other's toes editing the Math lib
docs - but these issues tend to crop up sooner rather than later which at
least makes the issue manageable to some level.
* I happen to like the fact that SVN stores things *not on my hard drive*,
it means I just don't have to worry about what happens if my laptop goes
belly up, gets lost, stolen, dropped, or heaven forbid "coffeed". On the
other hand the "instant" commits and version history from a local copy would
be nice...

Regards, John.

Dean Michael Berris

unread,
Jan 29, 2011, 6:02:01 AM1/29/11
to boost...@lists.boost.org
On Sat, Jan 29, 2011 at 6:08 PM, John Maddock <boost...@virgin.net> wrote:
>> I think you're looking at it as a purely tool vs tool comparison which
>> doesn't amount to much. Consider instead the workflow a distributed
>> version control system enables and you might see the difference
>> more clearly.
>>
>> Consider a library being worked on by N different people concurrently.
>> Each one can work on exactly the same code locally, making their
>> changes locally. Then say someone pushes their changes to the
>> "canonical" repository. Each person can then pull these changes
>> locally, stabilizing their own local repository, and fixing things
>> until it's stable. You can keep doing this every time without any one
>> of these N people waiting on anybody to "finish".
>
> That's exactly what we do now with SVN.
>

Something was lost in translation there: making changes means
committing locally. That's not what we do now with SVN.

See, when you're using git, committing locally and making changes are
equivalent. It's something you don't think about as something
different -- as opposed to how you think about it in SVN where "making
changes" is not equivalent to "committing". Therefore where you see git
users say 'making changes', that really means 'committing changes
locally'.
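
A minimal sketch of that difference, assuming nothing but a local git install (file names made up). Every line below happens offline, with no server in sight:

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.name t
git config user.email t@example.org

echo 'step 1' > work.txt
git add work.txt
git commit -qm 'first bite-sized change'    # a real, recorded commit

echo 'step 2' >> work.txt
git commit -qam 'second bite-sized change'  # another one, still local

git log --format=%s   # two commits of history, nothing ever pushed
```

Under SVN the same two edits would be a single undifferentiated working-copy state until the developer can (and is willing to) hit the central server.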

>> Now then imagine
>> that there's only one person who has push capabilities/rights to that
>> "canonical" repository and that person's called a maintainer.
>>
>> All the N-1 people then ask this maintainer to pull changes in or
>> merge patches submitted by them. If the maintainer is willing and
>> capable, that's fine and dandy changes get merged. Now consider when
>> maintainer is unwilling or incapable, what happens to the changes
>> these N-1 people make? Simple, they publish their repository somewhere
>> accessible and all the N-2 people can congregate around that
>> repository instead. MIA maintainer out of the way, release managers
>> can choose to pull from someone else's published version of the
>> library. Easy as pie.
>
> OK, if forking is a good thing then I can see how that helps.
>

Is there any question that forking is a good thing? I thought that was
kinda assumed with open source development. ;-)

> Question: what's to stop you from right now, building a better and greater
> version of library X in the sandbox, and then asking the Boost community to
> consider that the new Trunk and you the new maintainer?  Different tool,
> same process.
>

Because doing that requires permission to get sandbox access. And
because doing that involves many more steps than just clicking a 'fork'
button on a web UI on something like github. And also because doing
that means that you have to work with a single repository that has
potentially other people clobbering it. :)

> I still think there are pros and cons though:
>
> * As I see it Git encourages developers to keep their changes local for
> longer and then merge when stable.  That's cool, and I can see some
> advantages especially for developers wanting to get involved, but I predict
> more work for maintainers of the canonical repro trying to figure out how to
> resolve all those conflicts.

What gives the impression that resolving conflicts is hard on git?
It's easily one of the easiest things to do with git along with
branching. And because branching is so light-weight in git (meaning
you don't have to pull the branch every time you're switching between
branches on your local repo), conflict resolution and
feature-development isolation are part of the daily work that comes
with software development on Git.

And having multiple maintainers maintaining a single "canonical" git
repo is the sweetest thing ever. Merging changes from many different
sources into a single "master" is actually *fun* as opposed to painful
with a centralized VCS.

> Obviously with SVN we still get conflicts -
> for example Paul and I often step on each other's toes editing the Math lib
> docs - but these issues tend to crop up sooner rather than later which at
> least makes the issue manageable to some level.

See, imagine how that would scale if you added 2 more people working
on the same library with SVN. Just updating every time you need to
commit anything is a pain with the potentially huge conflicts you get
precisely because you can't commit changes more granularly locally in
your repository. Note that there's no notion of a "working copy"
because your local repository is what you work on directly.

The "pull-merge-push" workflow is so simple with git that it's largely
not something you *ever* have to deal with in any special manner. It's
just part of everyday development with git.

A suggestion: maybe someone ought to run a workshop or a
tutorial IRL on what the git workflow looks like. I think there are
tons of videos out there already along with countless books written on
the subject already.

> * I happen to like the fact that SVN stores things *not on my hard drive*,
> it means I just don't have to worry about what happens if my laptop goes
> belly up, gets lost, stolen, dropped, or heaven forbid "coffeed".  On the
> other hand the "instant" commits and version history from a local copy would
> be nice...
>

See, git does the same thing if you're using github as a publicly
accessible repo. You can duplicate the same to gitorious. You can even
put it on sourceforge for good measure. Synchronizing them is
scriptable and not rocket science. The fact that this is
even possible with git is something that gives it much more appeal for
disaster recovery.
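
A sketch of that mirroring setup, with local bare repositories standing in for github and gitorious (all names and paths hypothetical):

```shell
set -e
top=$(mktemp -d); cd "$top"

# Bare stand-ins for the hosted mirrors.
git init -q --bare github-mirror.git
git init -q --bare gitorious-mirror.git

# The working repository, with one commit in it.
git init -q work && cd work
git config user.name t
git config user.email t@example.org
echo 'library code' > lib.txt
git add lib.txt
git commit -qm 'initial'

# One-time setup: register each mirror as a remote.
git remote add github "$top/github-mirror.git"
git remote add gitorious "$top/gitorious-mirror.git"

# Keeping every mirror in sync is one small, scriptable loop.
for r in github gitorious; do git push -q "$r" --all; done
```

After the loop, each mirror holds the full history, so any one of them (or any developer's clone) can reconstitute the project.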

With SVN, if you're working on something locally (not committed yet)
and your hard-drive goes belly up, I don't see why it's better than if
you were working on git and have local commits and your hard-drive
goes belly up. With SVN though the question is what happens when your
server gets wiped out, what value is the data on your hard-drive then?
How do you reconstitute the history of the project from what you do
have with you? With git the risk mitigation options are a lot more
accessible and largely trivial. With SVN, not so much.

HTH

--
Dean Michael Berris
about.me/deanberris

Mathieu -

unread,
Jan 29, 2011, 7:27:33 AM1/29/11
to boost...@lists.boost.org
On Sat, Jan 29, 2011 at 6:08 PM, John Maddock <boost...@virgin.net> wrote:

> * I happen to like the fact that SVN stores things *not on my hard drive*,
> it means I just don't have to worry about what happens if my laptop goes
> belly up, gets lost, stolen, dropped, or heaven forbid "coffeed".  On the
> other hand the "instant" commits and version history from a local copy would
> be nice...

Sorry, but I don't like that actually. Servers (here the boost
repository) are also likely to crash/burn/whatever (oh, and yes there
are backups, but... I've seen backups destroyed before).
So here is what happens with git: your laptop dies, the main
repository dies? That's not really important, as thousands of users
all over the world have copies of this repo. That's much safer than
just one central server (actually, that could be compared to
thousands of backups).

Klaim

unread,
Jan 29, 2011, 9:42:23 AM1/29/11
to boost...@lists.boost.org
Hi

Edward Diener <eldi...@tropicsoft.com> writes:
> It is all this rhetoric that really bothers me from developers on
> the Git bandwagon. I would love to see real technical proof.


I'm not sure it will be of great help to you or others,
but this page is meant to explain, to people used to SVN,
the differences with decentralized source control tools and why they might be worth it:
http://hginit.com/00.html (it's about Mercurial/hg, but the decentralized way of doing things
is broadly the same with git and bazaar).

Hope it helps you understand the requests.

Joël Lamotte.

Dave Abrahams

Jan 28, 2011, 10:06:10 AM
to boost...@lists.boost.org
On Fri, Jan 28, 2011 at 8:53 AM, Eric J. Holtman <er...@holtmans.com> wrote:
> The developer's list isn't restricted access, is it?

No it is not. If you're interested in this sort of issue, please
subscribe there.

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com

Robert Ramey

Jan 29, 2011, 12:57:33 PM
to boost...@lists.boost.org
Dave Abrahams wrote:
> On Fri, Jan 28, 2011 at 4:35 PM, Robert Ramey <ra...@rrsd.com> wrote:

In the interest of moving toward a better modularization of boost
and decoupling of libraries, I would like to make a suggestion:

Can we change library testing so that each library is tested against the
current release branch of all the other libraries?

When I test on my own machine, I don't test against the trunk. I
test against the current release tree. Another way of saying this is
that on my machine I have my directory tree set to the Boost release
branch; ONLY the serialization library directories are set to the
trunk. So when I test on my local machine I KNOW that when I merge
into the release branch I won't have any unexpected problems.

a) I don't think this would be a huge change.
b) It would better test what the user actually does.
c) It would isolate each library from the others, so that
errors in the trunk (wild west) in one library don't impact
the development of other libraries.
d) It would promote decoupling of libraries.
e) As an intermediate step, not all testers would have
to make the change -- some could use the old script.

It would be a small, not too difficult step on the path
we think we want to travel.

Robert Ramey

Klaim

Jan 29, 2011, 2:00:21 PM
to boost...@lists.boost.org
Hi,
I'm not a Boost contributor (yet) and may be considered a novice, so take my advice knowing that.


On Sat, Jan 29, 2011 at 11:08, John Maddock <boost...@virgin.net> wrote:
* I happen to like the fact that SVN stores things *not on my hard drive*, it means I just don't have to worry about what happens if my laptop goes belly up, gets lost, stolen, dropped, or heaven forbid "coffeed".  On the other hand the "instant" commits and version history from a local copy would be nice...

I have the same kind of concerns.
I only started using Mercurial (hg) in the middle of last year and I'm not an expert, but the decentralized way of doing things changed my point of view a lot. (I used SVN before.)

The decentralized nature of the repositories moves the organizational responsibility from the tool (as in SVN) to the user (as in any DVCS). What I mean is that, since every developer has a full repository, or several clones, the communication of changes between repositories is driven only by how the contributors/team organize themselves.

For this specific point (having a non-local clone of your work), here is what I'm doing for all my projects now:

1. On my laptop, which I use in transit, I have one repository of my project. Coming from the SVN world, you could say it's my "trunk".

2. On the same laptop, I have several clones of repo 1. In each, I work on experimental features that I often just delete after some tests. If a feature is good enough, I "push" the changes (the commits made in that local repo) into repo 1. That sometimes means merging first, as I still work in repo 1 for the main features; I kind of work as if I were a big team, alone. Either way, decentralized repositories already let me fork for myself, which makes experimenting easier.

3. I have another repo on my home desktop. It's a clone of repo 1 holding some other work that needs several computer screens, so I work on those features mainly on my desktop. In fact I switch from laptop to desktop often, and I regularly make sure both are synchronized. I don't know how it's done in git, but hg (with or without TortoiseHg) can set up an HTTP server for you that listens for pull and push requests, so it's super easy to transfer changes between my computers, from repo 1, 2, or 3 to any of the others. In fact, I've started to build a kind of hierarchical relationship between my repos (which becomes natural once you understand the potential of "cloning"/"forking"). So I have two computers acting as backups of each other.

4. I don't trust my hardware very much, and sometimes I need changes from my desktop on my laptop while I'm far from it, perhaps in another city. So I also keep another repo on my online server (an Ubuntu box that I rent mostly for websites and an SVN repo, nothing special). Into this repo I push the non-experimental changes from my laptop and desktop.
By the way, hg works over ssh, so it's remarkable that you just have to clone a repo to make it available online via ssh (privately, or with more security if it's public). You don't have to launch any server (other than the one managing ssh).
My online server is thus a more secure backup that is available from anywhere.

5. I also have other repos on my servers for personal work that is experimental but takes a lot of time. So I still have a backup available online.

6. Some friends want to work with me on my project. I've set up clones for them on my private server, one per user. They can work however they want on their local computers and simply push their changes to their dedicated repository on my server. When they think they've produced complete, usable work, and it's all pushed to their repo, they mail me to review their changes and comment; if all is fine, I pull the changes into the "truth" repo that merges the whole team's effort. That truth repo is in fact a clone of mine that I call the "team" repo. But I can call it whatever I like; it's just a matter of organization, like setting up the "graph" of your team. :)

7. In fact, some of my projects are open source (not worth showing here, and it's not the point), so I have additional repositories on bitbucket.org and Google Code hosting. That way, anyone can act like a teammate on my private projects; it's just that the repository is publicly available, read-only. To "write", people have to ask me to review their code; if I think it's worthwhile and follows my standards for the project, I pull the changes and add a line somewhere about who contributed what. For open-source projects, the public repos are my "trust" repos, where I pull all work that is valid and finished. Note that bitbucket and github let you clone a repo "with one click", as Dean pointed out. That lets you work on your version for a while, pull public changes while you still work on your features, then ask the maintainer of the truth repo to take your changes. Or not.

8. I don't trust all hosting services, so I've set up scripts that pull all public and private changes into separate backup repos.

9. I haven't even started talking about branching. Branches are kept in the repository's history, while forks are not. I have branches for features that are required, not experimental, but take a long time to implement.
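For points 3 and 4, the equivalent in git would be a bare "hub"
repository on the server, reached over ssh (the host, path, and
branch name here are hypothetical):

```shell
# Create a bare "hub" repository on a personal server; nothing
# needs to run there beyond sshd.
ssh me@myserver 'git init --bare ~/project.git'

# Point the laptop and desktop clones at it and sync through it.
git remote add hub ssh://me@myserver/~/project.git
git push hub master    # publish from one machine
git pull hub master    # catch up on the other
```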

So the backup-security problem disappears, because it's easier to set up hosting for any DVCS than for SVN (there is no need for a listening server).
Some other points in my description might help explain why decentralization changes everything when you come from SVN.

Someone (Dean?) made this analogy some time ago: SVN/CVS are like mutexes over a container, while DVCSs are like lock-free containers.
That is, I think, the most accurate analogy for the differences between the two.

I avoided DVCSs for years because I thought they were not worth it in a team environment, "because" of their decentralized nature; in fact I only tried one because I figured it would be good for a one-person project. I must say I was totally wrong; it's the exact opposite.

That said, I'm just a "junior", so again take my experience as such.

Joël Lamotte

Steven Watanabe

Jan 29, 2011, 3:20:37 PM
to boost...@lists.boost.org
AMDG

On 1/28/2011 11:37 PM, Anthony Foiani wrote:
>> Please explain "cheaper and lighter weight" ?
>
> Please note, this might be from my inexperience, but I've found that
> the only effective way to work on a "private branch" in SVN is to
> check out the branch I care to modify in a separate directory.
>

You can use svn switch to point your working
copy at a different branch.

In Christ,
Steven Watanabe

Steven Watanabe

Jan 29, 2011, 3:33:29 PM
to boost...@lists.boost.org
AMDG

On 1/29/2011 3:02 AM, Dean Michael Berris wrote:
> On Sat, Jan 29, 2011 at 6:08 PM, John Maddock<boost...@virgin.net> wrote:
>> * As I see it Git encourages developers to keep their changes local for
>> longer and then merge when stable. That's cool, and I can see some
>> advantages especially for developers wanting to get involved, but I predict
>> more work for maintainers of the canonical repro trying to figure out how to
>> resolve all those conflicts.
>
> What gives the impression that resolving conflicts is hard on git?

Nothing, except that I do not trust any automated tool,
no matter how smart it is, to do the merge correctly
without manual review of every change. The tool has
no knowledge of the semantics of what it's merging.

> It's easily one of the easiest things to do with git along with
> branching. And because branching is so light-weight in git (meaning
> you don't have to pull the branch everytime you're switching between
> branches on your local repo) these conflict resolution and
> feature-development isolation is part of the daily work that comes
> with software development on Git.
>
> And having multiple maintainers maintaining a single "canonical" git
> repo is the sweetest thing ever. Merging changes from many different
> sources into a single "master" is actually *fun* as opposed to painful
> with a centralized VCS.
>

What does this have to do with whether the
repository is centralized or distributed?

In Christ,
Steven Watanabe

Matthieu Brucher

Jan 29, 2011, 3:45:12 PM
to boost...@lists.boost.org

>> And having multiple maintainers maintaining a single "canonical" git
>> repo is the sweetest thing ever. Merging changes from many different
>> sources into a single "master" is actually *fun* as opposed to painful
>> with a centralized VCS.

> What does this have to do with whether the
> repository is centralized or distributed?

Merging is natural for a decentralized tool; it's not the case for a centralized one. The best proof is Subversion, which until recently could not handle merge history correctly.

Matthieu
--
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher

Dave Abrahams

Jan 29, 2011, 4:50:45 PM
to boost...@lists.boost.org
On Sat, Jan 29, 2011 at 3:33 PM, Steven Watanabe <watan...@gmail.com> wrote:
>> And having multiple maintainers maintaining a single "canonical" git
>> repo is the sweetest thing ever. Merging changes from many different
>> sources into a single "master" is actually *fun* as opposed to painful
>> with a centralized VCS.
>>
>
> What does this have to do with whether the
> repository is centralized or distributed?

The fact that you can quickly try doing it several different ways
without affecting the "official repo" is a big plus. There's no
reason anyone should take my word for this, but I didn't really "get
it" about DVCSes until I actually tried using Git for a while.
Something about it changes the user experience drastically in ways
that are simply not obvious until you've gotten used to it.

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com

Dave Abrahams

Jan 29, 2011, 5:02:34 PM
to boost...@lists.boost.org
Robert, great questions. Could you post them to the boost developers'
list? They really don't belong here.

Thanks.

--

Dave Abrahams
BoostPro Computing
http://www.boostpro.com

Steven Watanabe

Jan 29, 2011, 5:22:47 PM
to boost...@lists.boost.org
AMDG

On 1/29/2011 1:50 PM, Dave Abrahams wrote:
> On Sat, Jan 29, 2011 at 3:33 PM, Steven Watanabe<watan...@gmail.com> wrote:
>>> And having multiple maintainers maintaining a single "canonical" git
>>> repo is the sweetest thing ever. Merging changes from many different
>>> sources into a single "master" is actually *fun* as opposed to painful
>>> with a centralized VCS.
>>>
>>
>> What does this have to do with whether the
>> repository is centralized or distributed?
>
> The fact that you can quickly try doing it several different ways
> without affecting the "official repo" is a big plus.

I don't understand. Quickly try doing what?

> There's no
> reason anyone should take my word for this, but I didn't really "get
> it" about DVCSes until I actually tried using Git for a while.
> Something about it changes the user experience drastically in ways
> that are simply not obvious until you've gotten used to it.
>

In Christ,
Steven Watanabe

Dean Michael Berris

Jan 30, 2011, 2:36:27 AM
to boost...@lists.boost.org
On Sun, Jan 30, 2011 at 6:22 AM, Steven Watanabe <watan...@gmail.com> wrote:
> AMDG
>
> On 1/29/2011 1:50 PM, Dave Abrahams wrote:
>>
>> On Sat, Jan 29, 2011 at 3:33 PM, Steven Watanabe<watan...@gmail.com>
>>  wrote:
>>>>
>>>> And having multiple maintainers maintaining a single "canonical" git
>>>> repo is the sweetest thing ever. Merging changes from many different
>>>> sources into a single "master" is actually *fun* as opposed to painful
>>>> with a centralized VCS.
>>>>
>>>
>>> What does this have to do with whether the
>>> repository is centralized or distributed?
>>
>> The fact that you can quickly try doing it several different ways
>> without affecting the "official repo" is a big plus.
>
> I don't understand.  Quickly try doing what?
>

The merge.

>> There's no
>> reason anyone should take my word for this, but I didn't really "get
>> it" about DVCSes until I actually tried using Git for a while.
>> Something about it changes the user experience drastically in ways
>> that are simply not obvious until you've gotten used to it.
>>

+1 to Dave's statement above.

--
Dean Michael Berris
about.me/deanberris

Anthony Foiani

Jan 30, 2011, 3:25:38 AM
to boost...@lists.boost.org

Steven, greetings --

> On 1/28/2011 11:37 PM, Anthony Foiani wrote:
>> Please note, this might be from my inexperience, but I've found
>> that the only effective way to work on a "private branch" in SVN is
>> to check out the branch I care to modify in a separate directory.

Steven Watanabe <watan...@gmail.com> writes:
> You can use svn switch to point your working copy at a different
> branch.

Huh, I should have known/remembered that. Thanks for the tip!

(It still seems that I'd want a full clone to work on multiple
independent features simultaneously, but at least "cp -a" then "svn
switch" on the copy would cut out the second grab over the network.
The disk usage cost would be the same as how I use hg and git, too; I
suspect that "git stash" would cover some of my use cases, but I am
still learning.)
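As far as I understand it, the stash workflow looks something like
this (the file name is arbitrary):

```shell
# Shelve uncommitted work to switch tasks...
echo "work in progress" >> notes.txt
git stash push -m "half-finished idea"   # working tree is clean again

# ...do something else, commit it, then restore the shelved change.
git stash pop
```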

Thanks again!

Best regards,
Tony

Steven Watanabe

Jan 30, 2011, 10:49:01 AM
to boost...@lists.boost.org
AMDG

On 1/29/2011 11:36 PM, Dean Michael Berris wrote:
> On Sun, Jan 30, 2011 at 6:22 AM, Steven Watanabe<watan...@gmail.com> wrote:
>> On 1/29/2011 1:50 PM, Dave Abrahams wrote:
>>>
>>> On Sat, Jan 29, 2011 at 3:33 PM, Steven Watanabe<watan...@gmail.com>
>>> wrote:
>>>>>
>>>>> And having multiple maintainers maintaining a single "canonical" git
>>>>> repo is the sweetest thing ever. Merging changes from many different
>>>>> sources into a single "master" is actually *fun* as opposed to painful
>>>>> with a centralized VCS.
>>>>>
>>>>
>>>> What does this have to do with whether the
>>>> repository is centralized or distributed?
>>>
>>> The fact that you can quickly try doing it several different ways
>>> without affecting the "official repo" is a big plus.
>>
>> I don't understand. Quickly try doing what?
>>
>
> The merge.
>

That's what I thought, but it didn't make sense
to me.

a) Merges in svn are always done locally first.
It doesn't change the repository until you
commit.
b) Why would I want to try it several different ways?
I always know exactly what I want to merge before
I start.
c) Even if I were merging by trial and error, I
still don't understand what makes a distributed
system so much better. It doesn't seem like it
should matter.

In Christ,
Steven Watanabe

Dean Michael Berris

Jan 30, 2011, 12:05:48 PM
to boost...@lists.boost.org
On Sun, Jan 30, 2011 at 11:49 PM, Steven Watanabe <watan...@gmail.com> wrote:
> AMDG
>
> On 1/29/2011 11:36 PM, Dean Michael Berris wrote:
>>
>> On Sun, Jan 30, 2011 at 6:22 AM, Steven Watanabe<watan...@gmail.com>
>>  wrote:
>>>
>>> On 1/29/2011 1:50 PM, Dave Abrahams wrote:
>>>>
>>>>
>>>> The fact that you can quickly try doing it several different ways
>>>> without affecting the "official repo" is a big plus.
>>>
>>> I don't understand.  Quickly try doing what?
>>>
>>
>> The merge.
>>
>
> That's what I thought, but it didn't make sense
> to me.
>
> a) Merges in svn are always done locally first.
>   It doesn't change the repository until you
>   commit.

That's the same in git, except locally it's a repository too. So that
means you can back out individual commits that cause conflicts, choose
which ones you actually want to commit locally -- largely because the
local copy is a repository you can re-order commits, glob together
multiple commits into a single commit, edit the history to make it
nice and clean and manageable (i.e. globbing together small related
changes into a single commit). All these things you cannot do with
subversion because there's only ever one repository version.
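For example, cleaning up local history before publishing it might
look like this (the commit counts are arbitrary):

```shell
# Reorder, squash, or reword the last three local commits in the
# editor that opens (safe because nothing has been pushed yet).
git rebase -i HEAD~3

# Or glob the last two commits into one without the editor:
git reset --soft HEAD~2
git commit -m "one consolidated commit"
```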

> b) Why would I want to try it several different ways?
>   I always know exactly what I want to merge before
>   I start.

Which is also the point with git -- because you can choose which
changesets exactly you want to take from where into your local
repository. The fact that you *can* do this is a life saver for
multi-developer projects -- and because it's easy it's something you
largely don't have to avoid doing.
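Picking exact changesets looks like this (the commit id and the
remote/branch names are placeholders):

```shell
# Take a single changeset from elsewhere into the current branch.
git cherry-pick 1a2b3c4

# Or fetch a collaborator's branch first and pick its tip.
git fetch alice
git cherry-pick alice/feature
```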

> c) Even if I were merging by trial and error, I
>   still don't understand what makes a distributed
>   system so much better.  It doesn't seem like it
>   should matter.
>

Because in a distributed system, you can have multiple sources to
choose from and many different ways of globbing things together.

I don't know if you follow how the Linux development model works, but
the short of it is that it won't work if they had a single repo that
everybody (as in 1000s of developers) touched. Even in an environment
where you had just 2 developers, by not having to synchronize
everything you're lowering the chance of friction -- and when friction
does occur, the "mess" only happens on the local repository, which you
can fix locally, and then have the changes reflected in a different
"canonical" repository.

For that matter, which repository is the "canonical" repository is
largely a matter of policy. In the Linux case it's the Linus
repository that is the "de facto, canonical" repository. If Linus'
repo is gone or suddenly the community stops trusting him and his
published repo, then the community can congregate around a different
repo.

--
Dean Michael Berris
about.me/deanberris

Scott McMurray

Jan 30, 2011, 2:44:26 PM
to boost...@lists.boost.org
On Sat, Jan 29, 2011 at 06:42, Klaim <mjk...@gmail.com> wrote:
>
> I'm not sure it will be of  great help to you or others,
> but this page is meant to explain to people used to SVN
> the differences with decentralized source control tools and why it might be
> worth :
> http://hginit.com/00.html (it's about mercurial/hg but the decentralized way
> of doing things
> is globally the same with git and bazaar).
>

There's one great line in there that basically sums up everything you
need to know:

"[A DVCS] separates the act of committing new code from the act of
inflicting it on everybody else."

Just about everything that's different stems from that distinction.
(Pushing and pulling are how you choose who to inflict with your code
and by whose code you wish to be inflicted, respectively.)

~ Scott

Steven Watanabe

Jan 30, 2011, 6:38:42 PM
to boost...@lists.boost.org
AMDG

On 1/30/2011 9:05 AM, Dean Michael Berris wrote:
> On Sun, Jan 30, 2011 at 11:49 PM, Steven Watanabe<watan...@gmail.com> wrote:
>> a) Merges in svn are always done locally first.
>> It doesn't change the repository until you
>> commit.
>
> That's the same in git, except locally it's a repository too. So that
> means you can back out individual commits that cause conflicts, choose
> which ones you actually want to commit locally

Okay, so this is just an instance of using
local commits--which as far as I can tell
is the only actual advantage of a DVCS.
I can understand someone wanting this although
it would probably not affect me personally,
since,
a) If I'm merging a lot of changes at once,
it's only because I'm synching 2 branches.
If I were trying to combine independent changes,
each one would get a separate commit anyway.
b) If I want to merge something and I get conflicts,
I'm probably going to resolve them instead of
reverting the changeset.

> -- largely because the
> local copy is a repository you can re-order commits, glob together
> multiple commits into a single commit, edit the history to make it
> nice and clean and manageable (i.e. globbing together small related
> changes into a single commit).

In svn, one commit is still one commit, no matter
how many changes you merge to create it. The notion
of editing the history to clean it up is not relevant.

>> b) Why would I want to try it several different ways?
>> I always know exactly what I want to merge before
>> I start.
>
> Which is also the point with git -- because you can choose which
> changesets exactly you want to take from where into your local
> repository. The fact that you *can* do this is a life saver for
> multi-developer projects -- and because it's easy it's something you
> largely don't have to avoid doing.
>

This doesn't answer the question I asked.

>> c) Even if I were merging by trial and error, I
>> still don't understand what makes a distributed
>> system so much better. It doesn't seem like it
>> should matter.
>>
>
> Because in a distributed system, you can have multiple sources to
> choose from and many different ways of globbing things together.
>

So, what I'm hearing is the fact that you
have more things to merge makes merging
easier. But that can't be what you mean,
because it's obviously nonsense. Come again?

> I don't know if you follow how the Linux development model works, but
> the short of it is that it won't work if they had a single repo that
> everybody (as in 1000s of developers) touched. Even in an environment
> where you had just 2 developers, by not having to synchronize
> everything you're lowering the chance of friction -- and when friction
> does occur, the "mess" only happens on the local repository, which you
> can fix locally, and then have the changes reflected in a different
> "canonical" repository.
>

Have you ever heard of branches? Subversion
does support them, you know.

In Christ,
Steven Watanabe

Steven Watanabe

Jan 30, 2011, 7:01:51 PM
to boost...@lists.boost.org
AMDG

On 1/29/2011 1:50 PM, Dave Abrahams wrote:

> The fact that you can quickly try doing it several different ways
> without affecting the "official repo" is a big plus. There's no
> reason anyone should take my word for this, but I didn't really "get
> it" about DVCSes until I actually tried using Git for a while.
> Something about it changes the user experience drastically in ways
> that are simply not obvious until you've gotten used to it.
>

I've noticed. Using Git seems to incapacitate people
from using any other version control tool. I think
it should be banned as a public hazard.

In Christ,
Steven Watanabe

Scott McMurray

Jan 30, 2011, 7:16:58 PM
to boost...@lists.boost.org
On Sun, Jan 30, 2011 at 15:38, Steven Watanabe <watan...@gmail.com> wrote:
>
> Okay, so this is just an instance of using
> local commits--which as far as I can tell
> is the only actual advantage of a DVCS.
>

I agree -- everything different about a DVCS is a consequence of
allowing local commits. The best way I've seen of phrasing that
fundamental difference:

"[A DVCS] separates the act of committing new code from the act of

inflicting it on everybody else." ~ <http://hginit.com/00.html>

Similarly, the differences between git and all the other DVCSs seem to
come from one of two decisions: 1) Store trees, not changes, and 2)
All parents in a merge are equal.

~ Scott

Dean Michael Berris

Jan 30, 2011, 7:35:36 PM
to boost...@lists.boost.org
On Mon, Jan 31, 2011 at 7:38 AM, Steven Watanabe <watan...@gmail.com> wrote:
> AMDG
>
> On 1/30/2011 9:05 AM, Dean Michael Berris wrote:
>>
>> On Sun, Jan 30, 2011 at 11:49 PM, Steven Watanabe<watan...@gmail.com>
>>  wrote:
>>>
>>> a) Merges in svn are always done locally first.
>>>   It doesn't change the repository until you
>>>   commit.
>>
>> That's the same in git, except locally it's a repository too. So that
>> means you can back out individual commits that cause conflicts, choose
>> which ones you actually want to commit locally
>
> Okay, so this is just an instance of using
> local commits--which as far as I can tell
> is the only actual advantage of a DVCS.

Sure, if you choose to look at it that way and ignore all the other
good things DVCSes bring to the table.

> I can understand someone wanting this although
> it would probably not affect me personally,
> since,
> a) If I'm merging a lot of changes at once,
>   it's only because I'm synching 2 branches.
>   If I were trying to combine independent changes,
>   each one would get a separate commit anyway.

With subversion, each commit is a different revision number right?
Therefore that means there's only one state of the entire repository
including private branches, etc.

What happens, then, when you have two people trying to merge from two
different branches into one branch? Can you do this incrementally? How
would you track the single repository's state? How do you avoid
clobbering each other's incremental merges? Remember, you're assuming
that you're the only one trying to do the merge on the same code in a
single repository. Consider the case where more people than just you
are merging from different branches into the same branch.

In git, merging N remote-tracking branches into a single branch is
possible with a single command on a local repo -- if you really wanted
to do it that way. Of course you already stated that you don't want
automated tools so if you *really* wanted to inspect the merge one
commit at a time you can actually do it interactively as well.
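The single-command case is the so-called octopus merge (the branch
names are hypothetical):

```shell
# Fold several branches into the current one in one command.
git checkout master
git merge feature-a feature-b feature-c
```

If any pair of branches conflicts, git refuses the octopus merge, and
you fall back to merging them one at a time.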

> b) If I want to merge something and I get conflicts,
>   I'm probably going to resolve them instead of
>   reverting the changeset.
>

Sure, in which case there's largely no problem whether you're using
git or subversion. But in git with a multi-developer project, you're
basically only touching your own repo most of the time, and
synchronizing a canonical repo is mostly a matter of policy (who does
it, when, etc.). In the context of merging, this means you can fix the
merge locally and then, if you have the rights, push to the canonical
repo so that others can pull from it again and continue on with their
work (at their own pace).

With subversion, everyone absolutely has to be on the same page all
the time, and that's a problem.

>> -- largely because the
>> local copy is a repository you can re-order commits, glob together
>> multiple commits into a single commit, edit the history to make it
>> nice and clean and manageable (i.e. globbing together small related
>> changes into a single commit).
>
> In svn, one commit is still one commit, no matter
> how many changes you merge to create it.  The notion
> of editing the history to clean it up is not relevant.
>

Why is it not relevant?

In subversion, one commit can only happen if your working copy's
version is up to date with the repo's version of the same checked out
branch. In git, because you have a local repo, well your commits are
basically just on your repo -- if something changes upstream and you
want to get up to date, then you pull and merge the stuff locally.
This means you can still commit changes without clobbering others'
work (or clobbering others' work but locally) and then 1) if you're
the maintainer push to the canonical publicly accessible repo or if
you're not the maintainer 2) ask the maintainer to pull your changes
in via a merge that the maintainer does for you.

The maintainer can then do the adjustments on the history of the repo
-- things like consolidating commits, etc. -- which largely is really
what maintainers do, only with git it's just a lot easier. Of course I
realize that's a matter of taste and paradigm though so I think YMMV
depending on whether you can wrap your head around it or not.

>>> b) Why would I want to try it several different ways?
>>>   I always know exactly what I want to merge before
>>>   I start.
>>
>> Which is also the point with git -- because you can choose which
>> changesets exactly you want to take from where into your local
>> repository. The fact that you *can* do this is a life saver for
>> multi-developer projects -- and because it's easy it's something you
>> largely don't have to avoid doing.
>>
>
> This doesn't answer the question I asked.
>

Of course you're looking at the whole thing with centralized VCS in
mind. Consider the case that you have multiple remote branches you can
pull from. If you're the maintainer and you want to basically
consolidate the effort of multiple developers working on different
parts of the same system, then you can do this piece-meal.

Suppose, just for the sake of example, that you, Dave Abrahams, and I
are working on some extensions to MPL.

I can have published changes up on my github fork of the MPL library,
and Dave would be the maintainer, and you would have your published
changes up on your github fork as well. Now let's say I'm not done yet
with what I'm working on but the changes are available already from my
fork. Let's say you tell Dave "hey Dave, I'm done, here's a pull
request". Dave can then basically do a number of things:

1.) Just merge in what you've done because you're already finished and
there's a pull request waiting. He does this on his local repo first
to run tests locally -- once he's done with that he can push the
changes to the canonical repo.

2.) Pull in my (not yet complete) changes first before he tries to
merge your stuff in to see if there's something that I've touched that
could potentially break what you've done. In this case Dave can notify
you to pull the changes I've already made and see if you can work it
out to get things fixed again. Or he can notify me and say "hey fix
this!".

3.) Ask me to pull your stuff and ask me to finish up what I'm doing
so that I can send a pull request that actually already incorporates
your changes when I'm done.

... ad infinitum.
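Option 1, sketched from Dave's side (the remote name, fork URL, and
branch names are all hypothetical):

```shell
# Fetch the contributor's work, test it locally, then merge
# and publish.
git remote add steven https://github.com/steven/mpl.git
git fetch steven
git checkout -b review steven/extensions   # inspect, run tests
git checkout master
git merge --no-ff review                   # record the integration
git push origin master
```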

With subversion, there's no way for something like this to happen with
little friction. First we can't be working on the same code anyway
because every time we try to commit we could be stomping on each
other's changes and be spending our time just cursing subversion as we
wait for the network traffic and spend most of our time just trying to
merge changes when all we want to do is commit our changes so that we
can record progress. Second we're going to have to use branches and
have "rebasing" done manually anyway just so that we can all stay
synchronized all the time -- which is sometimes largely unnecessary
until it's time to actually integrate changes. I can list more but
this reply is already taking longer than I expected so I'll stop it
short there.

>>> c) Even if I were merging by trial and error, I
>>>   still don't understand what makes a distributed
>>>   system so much better.  It doesn't seem like it
>>>   should matter.
>>>
>>
>> Because in a distributed system, you can have multiple sources to
>> choose from and many different ways of globbing things together.
>>
>
> So, what I'm hearing is the fact that you
> have more things to merge makes merging
> easier.  But that can't be what you mean,
> because it's obviously nonsense.  Come again?
>

Yes, that's exactly what I mean. Because merging is easy with git and
is largely an automated process anyway, merging changes from multiple
sources when integrating for example to do a "feature freeze" and
"stabilization" by the release engineering group is actually made
*fun* and easier than if you had to merge every time you had to commit
in an actively changing codebase.

>> I don't know if you follow how the Linux development model works, but
>> the short of it is that it won't work if they had a single repo that
>> everybody (as in 1000s of developers) touched. Even in an environment
>> where you had just 2 developers, by not having to synchronize
>> everything you're lowering the chance of friction -- and when friction
>> does occur, the "mess" only happens on the local repository, which you
>> can fix locally, and then have the changes reflected in a different
>> "canonical" repository.
>>
>
> Have you ever heard of branches?  Subversion
> does support them, you know.
>

And have you tried merging in changes from N different branches into
your private branch in Subversion to get the latest from other
developers working on the same code? Because I have done this with git
and it's *trivial*.

Also are you really suggesting that Linux development would work with
thousands of developers using subversion to do branches? Do you expect
anybody to get anything done in that situation? And no that's not a
rhetorical question.

HTH

--
Dean Michael Berris
about.me/deanberris

Dean Michael Berris

Jan 30, 2011, 7:40:26 PM1/30/11
to boost...@lists.boost.org
On Mon, Jan 31, 2011 at 8:01 AM, Steven Watanabe <watan...@gmail.com> wrote:
> AMDG
>
> On 1/29/2011 1:50 PM, Dave Abrahams wrote:
>>
>> The fact that you can quickly try doing it several different ways
>> without affecting the "official repo" is a big plus.  There's no
>> reason anyone should take my word for this, but I didn't really "get
>> it" about DVCSes until I actually tried using Git for a while.
>> Something about it changes the user experience drastically in ways
>> that are simply not obvious until you've gotten used to it.
>>
>
> I've noticed.  Using Git seems to incapacitate people
> from using any other version control tool.  I think
> it should be banned as a public hazard.
>

I OTOH pity the ones stuck with subversion. Learning and leveraging
Git effectively is like discovering the world is really round and that
there's actually a different and largely better way of working with
code.

How does that saying go, once you go black...

--
Dean Michael Berris
about.me/deanberris

Edward Diener

Jan 30, 2011, 10:13:16 PM1/30/11
to boost...@lists.boost.org
On 1/30/2011 7:16 PM, Scott McMurray wrote:
> On Sun, Jan 30, 2011 at 15:38, Steven Watanabe<watan...@gmail.com> wrote:
>>
>> Okay, so this is just an instance of using
>> local commits--which as far as I can tell
>> is the only actual advantage of a DVCS.
>>
>
> I agree -- everything different about a DVCS is a consequence of
> allowing local commits. The best way I've seen of phrasing that
> fundamental difference:
>
> "[A DVCS] separates the act of committing new code from the act of
> inflicting it on everybody else." ~<http://hginit.com/00.html>

But of course when you eventually merge all your local commits which Git
encourages you to keep to yourself until you are good and ready to
combine them with other people's repositories and local commits, it all
works flawlessly and easily without any question of conflicts and the
resulting code is just perfect because Git is so good.

Russell L. Carter

Jan 30, 2011, 11:54:18 PM1/30/11
to boost...@lists.boost.org

On 01/30/2011 08:13 PM, Edward Diener wrote:
> On 1/30/2011 7:16 PM, Scott McMurray wrote:
>> On Sun, Jan 30, 2011 at 15:38, Steven Watanabe<watan...@gmail.com>
>> wrote:
>>>
>>> Okay, so this is just an instance of using
>>> local commits--which as far as I can tell
>>> is the only actual advantage of a DVCS.
>>>
>>
>> I agree -- everything different about a DVCS is a consequence of
>> allowing local commits. The best way I've seen of phrasing that
>> fundamental difference:
>>
>> "[A DVCS] separates the act of committing new code from the act of
>> inflicting it on everybody else." ~<http://hginit.com/00.html>
>

Hi!

Idle comment from the peanut gallery here.

If git is so good, and it is so easy to maintain and merge between
branches, why not fork boost into a pure git canonical repo, (not the
ridiculously complicated setup(s) done previously) and then maintain
patches in a single dedicated svn branch for the svn dead enders, to
be applied whenever that makes sense? DVCS merging competence is
supposed to be bidirectional.

(In the biz 32 years now, cvs user for 20+, svn since the beginning,
git for 3+)

Russell

(submerging again, but not subversioning anymore... thank you git-svn)

Brian Wood

Jan 31, 2011, 2:20:24 AM1/31/11
to boost...@lists.boost.org
Edward Diener:

>On 1/30/2011 7:16 PM, Scott McMurray wrote:
>>
>>
>> I agree -- everything different about a DVCS is a consequence of
>> allowing local commits. The best way I've seen of phrasing that
>> fundamental difference:
>>
>> "[A DVCS] separates the act of committing new code from the act of
>> inflicting it on everybody else." ~<http://hginit.com/00.html>
>
> But of course when you eventually merge all your local commits which Git
> encourages you to keep to yourself until you are good and ready to
> combine them with other people's repositories and local commits,

I think that's called chewing the cud. I'm not a Git user, but it
reminds me of the saying, "Don't call us, we'll call you."

I find this thread interesting so thanks to all involved. And thanks
for not top-posting.

--
Brian Wood
Ebenezer Enterprises
http://webEbenezer.net
(651) 251-9384

Steven Watanabe

Jan 31, 2011, 4:20:05 PM1/31/11
to boost...@lists.boost.org
AMDG

On 1/30/2011 4:35 PM, Dean Michael Berris wrote:
> On Mon, Jan 31, 2011 at 7:38 AM, Steven Watanabe<watan...@gmail.com> wrote:
>> I can understand someone wanting this although
>> it would probably not affect me personally,
>> since,
>> a) If I'm merging a lot of changes at once,
>> it's only because I'm synching 2 branches.
>> If I were trying to combine independent changes,
>> each one would get a separate commit anyway.
>
> With subversion, each commit is a different revision number right?
> Therefore that means there's only one state of the entire repository
> including private branches, etc.
>

Yes, but I don't see how that's relevant.
How can the repository be in more than one state?
Now if only we had quantum repositories...

> What then happens when you have two people trying to merge from two
> different branches into one branch. Can you do this incrementally?

What do you mean by that? I can merge any subset
of the changes, so I can split it up if I want to,
or I can merge everything at once.

> How
> would you track the single repository's state?

Each commit is guaranteed to be atomic.

> How do you avoid
> clobbering each other's incremental merges?

If the merges touch the same files, the
second person's commit will fail. This is
a good thing because /someone/ has to resolve
the conflict. Updating and retrying the commit
will work if the tool can handle the merge
automatically. (I personally always re-run
the tests after updating, to make sure that
I've tested what will be the new state of
the branch even if there were no merge conflicts.).

> Remember you're assuming
> that you're the only one trying to do the merge on the same code in a
> single repository. Consider the case where you have more than just you
> merging from different branches into the same branch.
>
> In git, merging N remote-tracking branches into a single branch is
> possible with a single command on a local repo -- if you really wanted
> to do it that way.

This would require N svn commands. (Of course
if I did it a lot I could script it. It really
isn't a big deal.).

> Of course you already stated that you don't want
> automated tools so if you *really* wanted to inspect the merge one
> commit at a time you can actually do it interactively as well.
>

I didn't say that I didn't want automated tools.
I said that I didn't trust them. With svn that
means that, before I commit I always
a) run all the relevant tests
b) review the full diff

This is regardless of whether I'm committing
new changes or merging from somewhere else.

>> b) If I want to merge something and I get conflicts,
>> I'm probably going to resolve them instead of
>> reverting the changeset.
>>
>
> Sure, in that case there's largely no problem whether you're using git
> or subversion. But in git with a multi-developer project, you're
> basically only touching your own repo most of the time, and
> synchronizing a canonical repo is mostly a matter of policy (who does
> it, when, etc.). In the context of merging this means you can fix the
> merge locally and then push to the canonical repo if you have the
> rights to do it so that others can pull from that again and continue
> on with their work (at their own pace).
>
> With subversion what happens is everyone absolutely has to be on the
> same page all the time and that's a problem.
>

It isn't a problem unless you're editing the same
piece of code in parallel. If you find that you're
stomping on each other's changes a lot

a) The situation is best avoided to begin with. The
version control tool can only help you so much, no
matter how cool it is. No tool is ever going to
be able to resolve true merge conflicts for you.
b) Working in branches will buy you about as much as
using a DVCS as far as putting off resolving
conflicts is concerned.

Honestly, if you assume the worst case, and
don't use the tool intelligently, you're bound
to get in trouble. I'm sure that I could invent
cases where I get myself in trouble (mis-)using
git that work fine with svn.

> The maintainer can then do the adjustments on the history of the repo
> -- things like consolidating commits, etc. -- which largely is really
> what maintainers do,

Is it? I personally don't want to spend a lot of
time dealing with version control--and I don't.
The vast majority of my time is spent writing code
or reviewing patches or running tests. All of
which are largely unaffected by the version control
tool.

> only with git it's just a lot easier.

It isn't just easier with git, it's basically impossible
with svn. In svn, the history is strictly append only.
(Of course, some including me see this as a good thing...)

4.) Dave isn't paying attention, so nothing happens. A couple
years later, after we've both moved on to other things, he
notices my changes and decides that they're good and merges
them. ...More time passes... He sees your changes and
they look reasonable, so he tries to merge them. He gets
a merge conflict and then notifies you asking you to update
your feature. You are no longer following Boost development,
so the changes get dropped on the floor. ...A few more years
go by... Another developer finds that he needs your stuff.
He resolves the conflicts with the current version and the
changes eventually go into the official version.

This is something like how things seem to work in practice now,
and I don't see how using a different tool is going to change it.

> With subversion, there's no way for something like this to happen with
> little friction.

Why not? Replace "github fork" with "branch" and
subversion supports everything that you've described.

> First we can't be working on the same code anyway
> because every time we try to commit we could be stomping on each
> other's changes and be spending our time just cursing subversion as we
> wait for the network traffic and spend most of our time just trying to
> merge changes when all we want to do is commit our changes so that we
> can record progress. Second we're going to have to use branches and
> have "rebasing" done manually anyway just so that we can all stay
> synchronized all the time --

What do you mean by "rebasing." Subversion has no
such concept. If you want to stay synchronized
constantly, you can. If you want to ignore everyone
else's changes, you can. If you want to synchronize
periodically, you can. If you want to take specific
changes, you can. What's the problem?

>>>> c) Even if I were merging by trial and error, I
>>>> still don't understand what makes a distributed
>>>> system so much better. It doesn't seem like it
>>>> should matter.
>>>>
>>>
>>> Because in a distributed system, you can have multiple sources to
>>> choose from and many different ways of globbing things together.
>>>
>>
>> So, what I'm hearing is the fact that you
>> have more things to merge makes merging
>> easier. But that can't be what you mean,
>> because it's obviously nonsense. Come again?
>>
>
> Yes, that's exactly what I mean.

Apparently not, since your answer flips around
what I said.

> Because merging is easy with git and
> is largely an automated process anyway,

If you will recall, the question I started out with
is: "What about a distributed version control system
makes merging easier?" That question remains unanswered.
The best I've gotten is "git's automated merge is smart,"
but it seems to me that this is orthogonal to the fact
that git is a DVCS.

> merging changes from multiple
> sources when integrating for example to do a "feature freeze" and
> "stabilization" by the release engineering group is actually made
> *fun* and easier than if you had to merge every time you had to commit
> in an actively changing codebase.
>

I've never run into this issue.
a) Boost code in general isn't changing that fast.
b) My commits are generally "medium-sized." i.e.
Each commit is a single unit that I consider
ready to publish to the world. For smaller units,
I've found that my memory and my editor's undo
are good enough. Now, please don't tell me that
I'm thinking like a centralized VCS user. I know
I am, and I don't see a problem with it, when I'm
using a centralized VCS.
c) There's nothing stopping you from using a branch to
avoid this problem. If you're unwilling to use
the means that the tool provides to solve your
issue, then the problem is not with the tool.

>>> I don't know if you follow how the Linux development model works, but
>>> the short of it is that it won't work if they had a single repo that
>>> everybody (as in 1000s of developers) touched. Even in an environment
>>> where you had just 2 developers, by not having to synchronize
>>> everything you're lowering the chance of friction -- and when friction
>>> does occur, the "mess" only happens on the local repository, which you
>>> can fix locally, and then have the changes reflected in a different
>>> "canonical" repository.
>>>
>>
>> Have you ever heard of branches? Subversion
>> does support them, you know.
>>
>
> And have you tried merging in changes from N different branches into
> your private branch in Subversion to get the latest from other
> developers working on the same code? Because I have done this with git
> and it's *trivial*.
>

I've never wanted to do this, but unless there
are conflicts, it should work just fine. If
there are conflicts, you're going to have to
resolve them one way or another regardless of
the version control tool.

> Also are you really suggesting that Linux development would work with
> thousands of developers using subversion to do branches? Do you expect
> anybody to get anything done in that situation? And no that's not a
> rhetorical question.
>

It might overload the server. That's a legitimate
concern. But other than that, I don't see why not.
(However, since I have nothing to do with Linux
development, I may be totally wrong.)

In Christ,
Steven Watanabe

Klaim

Jan 31, 2011, 5:40:54 PM1/31/11
to boost...@lists.boost.org

On Mon, Jan 31, 2011 at 22:20, Steven Watanabe <watan...@gmail.com> wrote:
If you will recall, the question I started out with
is: "What about a distributed version control system
makes merging easier?"  That question remains unanswered.

Hi!

Sorry to interrupt, but I just googled this question (in fact I used the keywords directly on Stack Overflow) and found a question/answer that might (or might not) clarify the "why":


Basically : 

"The hassle in CVS/SVN comes from the fact that these systems do not remember the parenthood of changes. In Git and Mercurial, not only can a commit have multiple children, it can also have multiple parents!"
A full explanation is given in the accepted answer.

There may well be other reasons that help with merging, but I think this answer describes the main difference between CVS/SVN and Mercurial/Git where merges are concerned.
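That remembered parenthood is directly visible in the tools; for
example (branch names invented):

```shell
# A merge commit stores every parent, and git walks that ancestry
# to compute the common ancestor (the "merge base") on its own,
# which is what makes repeated merges cheap.
git log -1 --format=%P              # a merge commit lists two (or more) hashes
git merge-base branch-a branch-b    # the ancestor git will diff against
```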

Hope it helps.

Joël Lamotte.

Anthony Williams

Jan 31, 2011, 5:47:32 PM1/31/11
to boost...@lists.boost.org
Steven Watanabe <watan...@gmail.com> writes:

>> How do you avoid
>> clobbering each other's incremental merges?
>
> If the merges touch the same files, the
> second person's commit will fail. This is
> a good thing because /someone/ has to resolve
> the conflict. Updating and retrying the commit
> will work if the tool can handle the merge
> automatically. (I personally always re-run
> the tests after updating, to make sure that
> I've tested what will be the new state of
> the branch even if there were no merge conflicts.).

Right. This is one area where I get the most value from a DVCS
(YMMV). When someone has done conflicting changes, you can commit your
changes locally, so they are kept safe as a coherent whole. Only when
you try and push to the shared repository do you get merge
conflicts. When you've resolved the merge conflicts, you can then commit
a new version locally, and push that to the main repo. The final
revision history will now show your changes and the merge as separate
entries in the log, and if you mess up the merge it's easy to revert
back to your private state and try again. With subversion, unless you
are working on a private branch, then if someone else makes conflicting
changes before you check your code in then you have to merge their
changes into your working directory before you can commit. Unless you
save your changes first locally (e.g. in a zip file, or a backup
directory), then if you mess up the merge you might well lose your local
changes too.

Anthony
--
Author of C++ Concurrency in Action http://www.stdthread.co.uk/book/
just::thread C++0x thread library http://www.stdthread.co.uk
Just Software Solutions Ltd http://www.justsoftwaresolutions.co.uk
15 Carrallack Mews, St Just, Cornwall, TR19 7UL, UK. Company No. 5478976
