SVN for Maya


Zac

Dec 28, 2008, 9:14:40 AM
to python_inside_maya
Hi all,

I have started work on a Maya plugin for integration of Subversion
(SVN) Version Control.

This is my first plugin for Maya, but not my first time with Python or
SVN.

I am using:
pysvn for Subversion integration
PyQT4 for GUI

I have registered a project on Google Code and am committing to its
SVN repository as I develop.

http://code.google.com/p/svnformaya/

If there is anyone out there who would be interested in helping out
with this project, please contact me.

Regards,
Zac

Farsheed Ashouri

Dec 28, 2008, 1:16:46 PM
to python_in...@googlegroups.com
Nice work, Thank you.
Sincerely,
Farsheed.

Zac Shenker

Dec 29, 2008, 4:01:04 AM
to python_in...@googlegroups.com
Hi Farsheed,

Thanks for the comments. This project is still very much a work in progress: it still has a number of hardcoded variables and is missing a fair amount of the interface integration.

If there is anyone out there interested in this project, I would be happy to discuss its design and goals a bit further before coding much more.

Regards,
Zac

Hradec

Dec 29, 2008, 5:14:48 AM
to python_in...@googlegroups.com

Sounds really interesting man... I have been thinking about the potential of using svn in maya for a sort of asset management system for quite some time, and I think your project could be used for that...

I would be really interested in discussing the subject further, and maybe even contributing to your project...

congrats on the effort and initiative...

-H


Sebastian Thiel

Dec 29, 2008, 5:33:46 AM
to python_in...@googlegroups.com
Who will be your target audience? Will it be programmers who want to use your integration in their environments, or users who just need nice user interfaces?

Can you target both ?

From my perspective, pysvn already did the job for me. It does not come with any user interfaces, but for pipeline work, these can be very simple.

Something I had to improve, though, is the pysvn client, to make it a little easier to use when trying to check out or update files in a deep directory structure which does not yet exist locally. If your project, in addition to that, provided a nice modular user interface, it could work out well.

Zac Shenker

Dec 29, 2008, 6:01:47 AM
to python_in...@googlegroups.com
At this stage the main focus is to provide a solution for end users, but in saying that, where possible I am building Python classes that provide a lot of useful functionality, so it certainly could be useful for developers as well.

Personally I have quite a lot of experience with Python and SVN, but am new to programming for Maya.

I would like to ultimately include features that improve the pipeline and workflow processes, such as:
Going through the Maya scene files to discover file textures that are no longer being used.
Providing version control over all assets in the project.
Easy-to-use management of revisions and file locks.
Easy-to-use support for branching.
Possibly some basic support for automated merging and conflict resolution.
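The first of those features, finding textures a scene no longer uses, could start from the fact that Maya ASCII (.ma) scenes store a file node's image path in its fileTextureName attribute. A rough stdlib sketch (the function name and sample snippet are ours):

```python
import re

# In a Maya ASCII scene, a file texture node's path appears as:
#   setAttr ".ftn" -type "string" "<path>";
FTN_PATTERN = re.compile(r'setAttr\s+"\.ftn"\s+-type\s+"string"\s+"([^"]+)"')

def scene_texture_paths(ma_text):
    """Return every file-texture path referenced by a Maya ASCII scene."""
    return FTN_PATTERN.findall(ma_text)

# Comparing this list against the files actually present in the
# project's sourceimages/ directory would reveal textures that are no
# longer referenced by any scene.
```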

Regards,
Zac

Sebastian Thiel

Dec 29, 2008, 12:00:32 PM
to python_in...@googlegroups.com
This whole asset management part sounds like a general pipeline approach, with svn as data storage and perhaps as a database ( -> properties ).

Doing that properly would make many people, including myself, quite happy, as this is one major backbone of any pipeline.

Not being too familiar with Maya probably won't be too much of a problem, as you can design the system so that it only knows about maya at the very end, by plugging maya into your framework.

Thus whenever someone says asset management, I do not see maya in the first place, but files and how they are organized. This is useful for many other applications as well, which could play together with your asset management system.

Perhaps I am over-generalizing and making things more complicated than they actually are, and it's absolutely fine to start 'sane' and focus on one application at first to gain first experiences with asset management.

Let's see how it goes for you - in my head it looks like a large project requiring a good foundation.

Zac Shenker

Dec 30, 2008, 1:13:52 AM
to python_in...@googlegroups.com
Ideally what I would like to produce is a tight integration of an asset management solution into the Maya interface.

As you suggested it would probably be worthwhile to develop the asset management side in a form that is independent of Maya and then write a Maya plugin that makes use of the asset management code.

I am now considering developing the asset management side to support more than just SVN (at this stage looking at CVS, SVN and git), but the system design would allow for integration of any version control system that someone wants to write a module for.

So far my asset management experience includes using CVS & SVN on code projects and trying to use SVN on Maya projects.

Zac

Sebastian Thiel

Dec 30, 2008, 2:04:43 AM
to python_in...@googlegroups.com
Supporting all possible back-ends means you would have to generalize the version control API to cover any of them. That also means putting additional work into writing the middleware, and possibly sticking to the smallest common feature set among all the back-ends you want to support.

From my point of view, dedicating yourself to one backend will get you started faster, and choosing svn for that is probably not even a bad thing, as it works wonderfully in general and with maya.

A compromise would be to subclass the pysvn Client class providing your own svn-style interface ( possibly improving the pysvn client ), which would give you the option to use it as an adapter to another version control backend later on.

To nicely use svn within maya, you basically need the following ( keeping maya on multiple os's in mind ):
  • setup the svn repository such that all resource files ( .ma, textures, caches, possibly everything ) require a lock
  • Assure scenes are truly up to date when maya opens them ( using Scene Messages/Callbacks )
  • Possibly support two modes of operation:
    1. All local
      • People are working fully decentralized ( ok, they have the svn repo on some server ) and need an up-to-date local copy of their files on local disk.
      • Update scene files and all references/resources upon scene open ( requiring you to hook into maya a bit using callbacks )
    2. Partially Local, All on Network
      • People working together in one intranet usually prefer to update only the file they are currently working on ( i.e. the ma or mb file itself ) and want all resources to be pulled from the network server
      • Setup an svn hook to keep a server-side checkout of your data up to date at all times
I use approach no. 2, as updating files is costly ( even though the actual process takes less than a second, it's a delay people feel ).
To be truly general, though, you would need to support both ways, requiring some sort of config system to set things up accordingly ( .ini files ? .xml ? ).
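A minimal sketch of such a config, read with Python's stdlib parser (the section and key names are invented for illustration):

```python
import configparser

# Hypothetical project config deciding which of the two modes applies.
SAMPLE = """
[checkout]
; 'local'   = update the scene plus all references on open
; 'network' = update only the opened file, pull resources from the server
mode = network
server_checkout = /mnt/projects/showA
"""

def load_checkout_mode(text):
    """Return the configured mode ('local' or 'network')."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    return parser.get("checkout", "mode")
```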

The similarity between 1 and 2 is mainly that at least one file needs to be updated before the scene actually opens ( the scene itself ). In 1 you additionally parse all dependencies ( recursively, possibly slow if not cached ) and update those as well.

Here you see that approach 1 gets complex and time consuming, but it would need to be supported if the hobbyist's requirements are to be met, allowing collaboration through the svn server.

Zac Shenker

Dec 30, 2008, 2:33:07 AM
to python_in...@googlegroups.com
I see the middleware as not being too much of an issue, as the main components that change in significant ways between version control systems are how they handle merging and conflicts, which wouldn't be used in this case.

The way I see it the major required functions would be:
Import, Checkout, Update, Add, Commit, Lock/Unlock
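That minimal command set maps naturally onto an abstract backend interface; a sketch of what the middleware layer might define (class and method names are ours, not from any existing library):

```python
import abc

class VersionControlBackend(abc.ABC):
    """Minimal operations the asset manager needs from any backend
    (SVN, CVS, git, ...). Merge/conflict handling is deliberately left
    out, since locks make it unnecessary for binary assets."""

    @abc.abstractmethod
    def checkout(self, url, path): ...
    @abc.abstractmethod
    def update(self, path): ...
    @abc.abstractmethod
    def add(self, path): ...
    @abc.abstractmethod
    def commit(self, paths, message): ...
    @abc.abstractmethod
    def lock(self, path): ...
    @abc.abstractmethod
    def unlock(self, path): ...

# An SVN implementation would wrap pysvn.Client; a git or CVS module
# could be dropped in later without touching the Maya-facing code.
```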

In terms of the two situations you talked about, it really sounds like it comes down to which files are kept under version control. In the case of All Local, all the files related to the project would be stored in version control. In the network situation, only the files that are being changed regularly would be under version control; the rest would just be on the network fileshare. The issue I see with this is that you then lose the advantages of version control on a significant number of assets, just so that you don't have to wait a few seconds to grab the latest files.

Personally, working on a local copy makes the most sense to me if you want the most flexible workflow, as you would be able to easily go back to any earlier version of any of your assets, not just some of them.
I guess another solution is to separate your version control repository into two: one that you always keep a local copy of, such as your scene files, and another that you use to update what is on your network fileserver. That way, files on the network fileserver are never changed directly, only through SVN, so there is still version control in effect.

Zac

Ian Jones

Dec 30, 2008, 2:59:51 PM
to python_inside_maya
I would highly recommend looking up an older project called mayasvn
(http://mayasvn.sourceforge.net/). I thought it tackled this problem in
a fairly elegant way (at least on the user side), as it implemented
script hooks on BeforeOpen/AfterSave which allowed the svn handling to
complement the standard file open/save etc. dialogs nicely.

I've got custom modifications for handling svn auth somewhere - I'll
try and dig it all up and post it tonight for reference.

Ian

Chad Dombrova

Dec 30, 2008, 4:02:31 PM
to python_in...@googlegroups.com
hi all,

i agree that the system would be best as a general asset management system, perhaps with sub-modules for different applications:

cgsvn
cgsvn.apps.maya
cgsvn.apps.nuke

  1. Partially Local, All on Network
    • People working together in one intranet usually prefer to update only the file they are currently working on ( i.e. the ma or mb file itself ) and want all resources to  be pulled from the network server
    • Setup an svn hook to keep a server side checkout of your data uptodate at all times

in my mind, the server-side checkout is the most complicated part of the system and the most crucial to get right.   

in a normal production environment, there are different artists working on many shots, and each might require different revisions of the revision-controlled assets.  unfortunately, to keep things simple, SVN is designed with a great deal of redundancy in mind -- each working copy might share 80% of the same files -- which doesn't matter much when working with text files, but for binary assets totaling in the hundreds of gigabytes, this redundancy becomes untenable.  so the question is: what is the best way to simultaneously provide multiple users any revision of any asset they need, while keeping rampant disk usage in check?

some ideas for creating a production-friendly asset management system using SVN:

  • create a directory on the server which will be the "network store" and register it with the "cgsvn" server ( perhaps through a config file )
  • add a network checkout mode to the cgsvn python api. when checking out a file in this mode, the file is checked out to the network store location, and a symbolic link is created in the working copy that points to this network file.  
  • users should never directly interact with the files checked out to this network store.  the files themselves could be named with a non-human readable hash unique per file, so that each name is unique, even for multiple revisions of the same file.  ( i believe that svn uses a hash to identify each file already, so we could use the same hash. it might even be available via pysvn api. )
  • when the user updates a networked file in their working copy, the cgsvn server first checks to see if the desired update revision already exists in the network store (because another working copy is already using it as a networked checkout).  if it exists, the server simply updates the symbolic link in the working copy to point to the network file. otherwise, it checks out the file to the network store and then makes/updates the symlink. after the update is completed, the cgsvn server checks whether the version that was just replaced is currently used by any other working copies.  if not, it is removed to save disk space.  optionally, an expiration time can be specified, so that if it has been unused for more than x days, it is removed.

advantages:

  • avoids creating redundant copies of enormous assets, thereby dramatically reducing network traffic, storage space, and transfer delays.
  • maintains the svn "working copy" paradigm, where ALL required files are represented within the working copy.  using environment variables and/or relative paths throughout maya/shake/nuke scenes, the entire working copy could be relocated, even between local and network disks.
  • a mechanism could be provided to switch the network store symlinks to point to a "local store" on the local hard disk for performance gains.  this local store would be organized just like the network store, but would only contain files requested for local operation.

disadvantages:

  • symbolic links are not fully supported on windows XP (it supports something called hard links, but i'm not sure what the difference is or whether they are posix compatible ).  i've read that symbolic links are supported on Vista, but i have not tried it out yet.   i will do some more research on the subject.

some other thoughts:

  • trac (http://trac.edgewall.org) is based on svn, mysql, and python and provides its own api which is svn-agnostic and therefore more future-proof.  it provides higher level functionality than pysvn api.  i found it much easier to use, but it might be too high level for this type of project, considering how much low level svn interaction will be necessary to accomplish what is needed.
  • distributed systems such as git and mercurial would not work in a production environment because each working copy contains the entire repository, which could be multiple terabytes of data.
  • svn would be my choice of version control system because it is actively being developed and it provides hooks for using custom diff and merge tools, which would let us create and integrate custom tools for each application -- maya, nuke, shake, etc -- that give more meaningful diffs than a straight text-based diff.


i'm very interested in this project and would love to collaborate, but from my standpoint the maya integration would necessarily come after the creation of a more general revision control system aimed at a CG production environment.  i'm in the middle of doing research for my own solution, but it was not until i read this thread that my ideas really crystallized.  i'm eager to hear everyone's feedback.  great discussion, keep it up.


-chad

Robert Durnin

Dec 30, 2008, 4:47:40 PM
to python_in...@googlegroups.com
I've got to chime in and offer up this kit that a friend recently
passed on to me:

http://sourceforge.net/projects/gto/

It's a 3d-package-agnostic geo/anim caching format that has been
specified strictly enough to describe "locations in space" of all
types in 3d and how to rebuild them in your renderer.

I have been looking at some of these problems for a while, and believe
a large part of the issue with tracking scenes disappears when you are
able to separate "revisions" from "versions" and apply tracking only
to data that is being passed around inside the pipeline (and therefore
between departments/apps). If I am not being clear enough, what I
really mean is that tracking "saved" scenes is the greatest obstacle
to an asset management tool, and it is almost completely circumvented
when the team only needs to pass around raw (ascii) data... in the
case of gto, streamlined data which can also be importance sampled and
whose integrity can, for the most part, be reacquired through diffable
versions.

Separating the idea of a revision (a saved scene file which is used
to produce an asset) from a version (an asset which is generated by
some process of development and approval) greatly reduces the need to
pass around binary scene files which contain a myriad of dependency
nodes (especially in the case of maya)... this largely defines a
workflow spec for facilities, which may be hard to adopt, but, in my
mind at least, it is much easier to develop a set of classes with
methods for deconstructing data for compatibility with a database
than an all-encompassing toolset which is trying to find sneaky ways
of overcoming the limitations of binary dag flows.

Chad: geo -> anim cache-> assembly: working backwards from the end
result towards what you want to extract from the scene and what needs
to be "gone back to" or "spliced" into new scenes for further review.

Working entirely in maya also undermines the ability to pass an
environment alongside the data which can be used in other apps that
might need to manipulate it: not all apps have a functioning
scripting interface through which the asset management system can
force calls to the os, and building a set of interfaces inside each
app separately is a waste of time. I can foresee coming up with a kind
of virtual-desktop or virtual-console which could be used to proc new
apps or visit the database and perform some actions on the results,
but basing it in maya closes off a lot of the other shops.

Robert

Sebastian Thiel

Dec 30, 2008, 4:51:50 PM
to python_in...@googlegroups.com
I had a look at the project: the plugin providing access to api messages would probably be quite useful if one were limited to using mel.
As the asset management system could use these callbacks directly through python, the plugin would not be required.

Your intro page gives a nice idea of what users might want - so it's worth reading and thinking about when designing a more general asset management approach.


Sebastian Thiel

Dec 30, 2008, 6:46:01 PM
to python_in...@googlegroups.com
Let's see whether I get this right:
You are saying that an asset management system solves just one part of the problem.

paraphrasing robert
If an asset management system is supposed to store resources and their dependencies on each other, in the end you want these resources to end up in some final result ( like a movie ). Thinking about how these resources are organized and passed on between team members and departments is the crucial second part of the problem.

The gto file format and its command line tools can be used to extract production values from applications ( and their proprietary file formats ) into a common format, creating a new version. This file can be combined/merged/diffed with the gto tools to proceed through the pipeline toward the final product.

I agree with the general concept, but think it's yet another part of a pipeline in general. If something like a gto pipeline is done right, the asset management could in fact just focus on simple revision control of the application files from which values are extracted and possibly stored in a different, implicitly versioned layout.


chad's ideas
As this discussion focuses on asset management functioning as a dependency-aware revision control system, I will now come back to Chad's ideas:

Chad stores files on a storage ( i.e. network ) location and a local one, using symlinks to link a static local path to whichever revision of a resource is located on the storage location. This makes revision switching ( of references, for instance ) easy. He also mentioned that this is something that works in the linux world - ntfs junctions ( hardlinks ) only work on the respective local storage and cannot link to network shares/mounts on a different device ( I don't know about Vista though ).
Keeping multiple check-outs of multiple revisions of the same file on the network would require different check-out folders, as svn cannot check out a file in different revisions into the same folder afaik. This asset management approach implies a folder structure that you build in some storage location.

my approach to local/storage locations
When I brought this local/network data-mix up, I actually had something different in my mind:
The network location is mounted write-protected and may only be modified by subversion itself. It's clearly not supported that people use different versions of some resources, as only the latest version will be found on the storage location ( it's always up-to-date ).
If a resource affects other resources ( like a rigged character affecting scene 1 and scene 2, which both use it ), and the affecting resource breaks affected resource 1 but not resource 2, in my approach you can do nothing more than branch the working revision of the affecting resource specifically for resource 2, creating a new copy of it. Alternatively, and this is my main purpose for revision control ( so far ), one would revert the affecting resource and try again.
Not using symlinks keeps it simple(r) and assures os compatibility.


about paths
So far, we have not really talked about how to handle paths: using environment variables works in maya, and using dirmap as well. Other applications might not be able to resolve variables. In the end, each application might require its own approach to handle different storage locations and to possibly switch between them.
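For applications that cannot resolve variables themselves, the asset layer could normalize paths before handing them over; a small stdlib sketch (the $PROJ_ROOT variable and function name are invented examples):

```python
import os

def resolve_asset_path(path):
    """Expand environment variables like $PROJ_ROOT in a stored path,
    so scenes can reference assets location-independently while each
    application still receives a plain absolute path."""
    return os.path.normpath(os.path.expandvars(path))
```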

git
I wouldn't say that git couldn't do the job, but unfortunately I am not a specialist in it. As far as I could figure out, it stores files much more efficiently when they are checked out locally, as it does not store the revision base files for each checked-out file ( which in svn effectively doubles the amount of space used ).
Also it has a git server, which would allow for centralized storage if required.
It appears, though, that git cannot update/retrieve just one file, which is required to start working locally on a locked file. Speaking of which: is it possible to lock files in git ? I think it's not a built-in feature of the system.

So in the end, svn seems to be favorable, although it's not perfect. For me it does the job pretty well though.

conclusion
That's a hell of a topic Hradec poked into. If we could figure out the 'one' way to get it right for most of the target users, I would be glad. Unfortunately I do not see it coming yet, as a thread/mailing list is probably not the best way to efficiently gather information - it gets lost in the pure mass of lines.

So am I - lost ( for today ;) ).

Robert Durnin

Dec 30, 2008, 7:16:33 PM
to python_in...@googlegroups.com
On Tue, Dec 30, 2008 at 3:46 PM, Sebastian Thiel
<byro...@googlemail.com> wrote:
> Let's see whether I get this right:
> You are saying that having an asset management system just solves one part
> of the problem.
> [...]

That is, for the most part, what I am saying, at least as far as
finding a reason to integrate subversion (or version control) into
maya goes. The biggest obstruction to making it work is also, in my
mind, worth getting around completely and not just stepping over. If
the workflow complements the pipeline, artists can keep working
versions of scenes in their own sandbox or shot-sandbox areas, and the
asset management system can remain focused on storing data and
dependencies.

But this toolset is only as good as its ability to function within a
greater schema, and that requires the ability to interface with the
same database without having to depend on a scripted interface or
embedded shell like the one available through maya... although the
fancy hooks would be a complement to a small-scale facility, it
wouldn't really work once you try to take on other software, not
without having to build similar interfaces again and again.
I've got lots to say on this one, but will try to be brief.

We came up with some solutions in the past which focused on trying to
proc everything within python, building classes which contained
environment data-members for managing (and storing with pickle) the
execution environment, so that tasks could be reopened remotely and
then executed in series on the same machine with the correct
environment per task (we wanted to be able to use dependencies to
create task lists which could be run remotely on a queue/render
queue). Python file streams (Popen etc.) are not meant to be opened
indefinitely, and we ran into issues with running/ghost processes
being left on machines, which were hard to seek out and kill.

In the case of windows, or shops that have windows stations, there is
also the problem of the dos shell not managing env vars correctly, or
really being flexible enough to allow for any kind of reliable
inheritance of variables passed to the process you start. We came up
with some solutions resembling a linux setup, with some trickery using
different built-ins, but overcoming the culture of starting everything
from the desktop has been a battle... the end result of which is why I
suggested a virtual-console or virtual-desktop. Using the asset picker
to be responsible (via dependency) for setting variables like
JobSeqShot, MayaVersion, etc., and then supplying values to the
commands we use to launch our shell scripts, has led me to thinking
about how to treat windows env issues in a new way.

Ultimately, the version controlling of maya assets, as a pipeline
task, is greatly simplified when you only look at what you are trying
to assemble; and more often than not that is either data for simming
(caching) or data for rendering. Working backwards from these two
cases and trying to limit the amount (or complexity) of that data, so
that it can be easily stored and accessed through a database, is where
a format like gto starts to really make sense... you can spend a lot
of time trying to make flat binary scene data into auditable assets OR
you can spend MUCH less time tracking data whose integrity can be
easily audited through the pipeline and spend the extra time
optimizing schemas for passing it through the pipe.

> git
>
> I wouldn't say that git couldn't do the job, but unfortunately I am not a
> specialist on it.
> [...]
>

I'll have to look git up, I can't say I know what it is.

> conclusion
>
> That's hell of a topic Hradec poked into.
> [...]

I'm sure we will speak tomorrow anyway, goodnight.

Robert

Zac Shenker

Dec 30, 2008, 10:06:54 PM
to python_in...@googlegroups.com
Thanks for all your replies so far, they are providing great insight for me into the workflow of larger productions. So far my experience has only been working on smaller productions with 3 people at university.

I have been thinking about another approach for network fileshare based setups:
1. The fileserver keeps an up-to-date copy of trunk.
2. When a user needs a set of files locally they select the files they need and a branch is created with these files.
3. The new branch is checked out to the local machine.
4. When work is completed, the changes are committed to the branch.
5. The branch is merged into trunk.

We could also lock the files on trunk that we are copying to the branch and unlock them when we do the merge.

The idea would be to try and automate as much of that process for the users as possible so that all they have to do is select the files they need to work with.
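Automating those five steps mostly means issuing the right svn commands in order; a sketch that just assembles the command lines (URLs, paths and the branch naming are made up, and a real tool would run these via pysvn or subprocess -- note the final merge is applied to an up-to-date trunk working copy, as svn merges into a working copy, not a URL):

```python
def branch_workflow_commands(repo_url, branch_name, local_dir, trunk_wc):
    """Return the svn command lines for the branch-edit-merge cycle:
    branch from trunk, check out locally, commit, merge back to trunk."""
    branch_url = "%s/branches/%s" % (repo_url, branch_name)
    return [
        # step 2: create the branch server-side (cheap copy)
        ["svn", "copy", repo_url + "/trunk", branch_url,
         "-m", "branch for %s" % branch_name],
        # step 3: check the branch out to the local machine
        ["svn", "checkout", branch_url, local_dir],
        # step 4: commit finished work to the branch
        ["svn", "commit", local_dir, "-m", "work on %s" % branch_name],
        # step 5: merge the branch into a trunk working copy
        ["svn", "merge", branch_url, trunk_wc],
    ]
```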

Zac

Chadrik

Jan 12, 2009, 2:42:09 AM
to python_inside_maya

i spent some time over the past few weeks researching various
open-source version control apps for use in vfx. thought i'd throw you
all an update with my findings. as i explored different options and
thought about the big picture, i came up with some features that i
considered necessary and/or preferable.

---prerequisites---
free or very cheap ( perforce is $900/user x 100 users = $90,000 = non-option )
cross platform
python api
fast performance with binary files
configurable to conserve disk space
- ability to easily remove unneeded files from repo (aka 'obliterate')
- limited file redundancy

---bonus---
no recursive special directories ( like .svn directories )

much of the prereqs are based around the notion that we'll be dealing
with some very large files. we want to avoid replicating them all
over our server, because redundancy is a waste of disk space, network
traffic, and copy time.

so, what were my conclusions? subversion simply won't work. here's
why:
while subversion's python api seems quite top notch, subversion itself
fails pretty miserably when it comes to binary performance and disk
space usage. it stores all files in the repo using a delta algorithm,
meaning each file is stored not as a whole file, but as the difference
between itself and the previous commit. this has the advantage of
saving disk space and of always having the diff on hand. however,
calculating a delta for many large binary files -- and then later
merging deltas to reform complete files -- takes prohibitively (read:
insanely) long. take a look at this article for some performance tips
and figures: http://www.ibm.com/developerworks/java/library/j-svnbins.html.
unfortunately, their solution is to use svn's import and export
commands, which store and retrieve binary files whole and
uncompressed. the problem is that you don't get any version control
on those files, so what's the bloody point?

the second major failing is disk space usage. the delta algorithm
saves space, but that space savings is far outweighed by several
failings. first of all, every file you check out is stored twice.
yep, EVERY file. in addition to your working copy it keeps an extra
copy in the .svn directory so that IF you edit the file you can do a
quick, offline diff. there's no way to turn off this "feature". so,
if you're checking out 500GB of data, it's gonna be more like 1TB.
all that extra disk space used up in every working copy is of almost no
benefit, because diffs between binary files are useless without a
custom app to interpret the data.  last in the disk space category, if
a user accidentally checks in 100GB of cache data, or let's say your
repo is getting very large and you want to wipe out some old versions
of an asset that you know aren't being used, you cannot do so without
going through some extreme pain.  you have to use `svnadmin dump` to
dump your entire repo to a text file, then use svndumpfilter to filter
through your data and remove what you don't want, then rebuild your
repo. this process can take many hours if your repo is very large.
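That round-trip is usually run as one shell pipeline; here's a small sketch that just assembles the commands (the repository paths are made up):

```python
# Build (but don't run) the dump -> filter -> load pipeline used to
# strip unwanted paths from an svn repository. Paths are examples.
import shlex

def obliterate_pipeline(repo, new_repo, unwanted_paths):
    """Return the shell pipeline that filters unwanted paths out of a
    full repository dump and loads the result into a fresh repo."""
    dump = "svnadmin dump " + shlex.quote(repo)
    filt = "svndumpfilter exclude " + " ".join(
        shlex.quote(p) for p in unwanted_paths)
    load = "svnadmin load " + shlex.quote(new_repo)
    return " | ".join([dump, filt, load])
```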

the last part is a pet peeve, and that's the recursive .svn
directories. these are annoying to deal with because if you decide to
switch out some directories in your working copy with some others of
the same name and you expect it to simply use the new ones in their
place, it won't work. you have to copy over all the .svn folders from
the original into the new set. imagine how well this will work with
artists! you would have to write scripts for moving and modifying
these .svn directories and the artists would have to reliably use them
instead of just dragging and dropping directories or the system would
break down.

i was pretty disappointed to finally come to this conclusion about
subversion, but the fact is that it does what it's meant to do well,
and managing large binary datasets is not what it's meant to do. so,
i moved on and began applying my criteria to pretty much every
revision control system i could find ( using this list:
http://en.wikipedia.org/wiki/Comparison_of_revision_control_software
). most are cvs/svn derivatives with no real advantage in feature
set. i ran away from anything that used delta compression on binary
files, and at first i shied away from distributed systems because of
what i read in the mercurial manual:

" Because Subversion doesn’t store revision history on the client, it
is well suited to managing projects that deal with lots of large,
opaque binary files. If you check in fifty revisions to an
incompressible 10MB file, Subversion's client-side space usage stays
constant. The space used by any distributed SCM will grow rapidly in
proportion to the number of revisions, because the differences between
each revision are large.
"

essentially, if you have a 500GB repo, then that 500GB is copied to
every working copy. ie: mercurial is worse than subversion with
binary files ( and subversion is already pretty bad with binary
files ). i shouldn't write off mercurial, though, because with the
right features, it still might be viable, because as i shortly
discovered, my favorite option ended up being a distributed system....

that system is "git". so far, i think it has the most potential of
anything i've seen. it's distributed, but very flexible and has many
different models for revision control, plus a lot of options to help
save disk space / network traffic. it can even be configured to work
like cvs/svn, if that is your desire.  the project was started by
Linus Torvalds, and as he put it: "It's not an SCM, it's a
distribution and archival mechanism. I bet you could make a
reasonable SCM on top of it, though. Another way of looking at it is
to say that it's really a content-addressable filesystem, used to
track directory trees." ( taken from this helpful site:
http://utsl.gen.nz/talks/git-svn/intro.html )

the python api is provided by a 3rd party, which is a bit
disappointing (ironic, coming from the guy who started pymel), but it
exists and looks object-oriented enough. git doesn't use delta-
compression, the amount of history copied from a repo can be limited
or even shared via hard links, it has the ability to prune old
commits, it has an option to pack away commits that are no longer used
into even greater compression, and it doesn't use annoying recursive
directories.

i haven't begun using git in a real-world test yet, but if you're
looking for something to base a pipe on, this could be the horse to
bet on. ultimately, i would really like to start an open-source
asset management project, so take a look at git and see what you
think. i'll let you know as i find out more. i haven't done a speed
test on a large image sequence yet, that could still be a deal-
breaker, but so far it "feels" fast.

-chad

Matt Estela

unread,
Jan 12, 2009, 3:01:42 AM1/12/09
to python_in...@googlegroups.com
fantastic research, thanks for all that hard work chadrik!

Zac Shenker

unread,
Jan 12, 2009, 9:39:19 AM1/12/09
to python_in...@googlegroups.com
I will post a full response in the next few days when I get a chance, I am away at the moment.

git is quite a fast version control system, but it raises its own issues. As it is distributed there is no way to lock files as you can with SVN, which could create a number of issues when multiple people commit the same files.

I am yet to find a version control system that handles binary files in a very elegant manner.

The approach I quite like for larger projects is producing branches with the appropriate files as they are needed and then checking out that branch, so you only get the files you need to work on. When you are done working, the branch is merged back into trunk. If you want to avoid conflicts you could lock the files on trunk.

In this case you may then want a network share that always has a checkout of the up-to-date trunk.


Regards,
Zac

Chad Dombrova

unread,
Jan 12, 2009, 11:54:16 AM1/12/09
to python_in...@googlegroups.com
file locking is not a show-stopper like the problems i have found with
svn. it's really just a matter of communication and this is something
that we can easily implement in our own custom setup based on an app
like git.

here's a question-answer session about file locking in git: http://stackoverflow.com/questions/119444/locking-binary-files-using-git-version-control-system
. many people favor the same branch configuration that you suggest.

-chad

chadrik

unread,
Jan 12, 2009, 1:57:21 PM1/12/09
to python_in...@googlegroups.com
i got an off-list email about some possible pitfalls with git and i'd like to post my reply because it clarifies some stuff i glossed over in my last email  for the sake of keeping it under 10 pages :)

Chad,

I'd be interested to know what you discover.  I've only worked with git a handful of times, but from my understanding it might not be the perfect solution that you might hope.

i'm not a pro yet either, but i'll try to respond to these with what i *think* i know, thus far.


For one, it's currently very difficult to make git update anything less than the entire repository at a time.  You cannot, for instance, update a few files while holding some other files back while you're still making changes.

every working copy is its own repository.  there is no notion of a central repository.  when you perform an update in git, you are only updating your repo, and i believe you are correct about not being able to choose which files you update.  however, it takes another step to update your working files from your repo, and i believe that you can do this per file.  will double check on this.



Secondly, as I understand it, although this might just be an aspect of using the git-svn bridge, your 'checkout' will actually include the entire change history of the repository.  Which means you not only have two copies of each file (as per your point about svn) but many many times more.

git provides a lot of control on this front, and it's the reason i changed my mind on distributed version control, at least in git's case.  there are 3 options which alleviate this problem in different ways. when you clone your repo from a source repo you can specify:

--local:  object files stored in .git will be hard links to those in the source repo (must be on the same file system)
--shared:  don't copy the object store at all, just use the source repo's (the source repo must be accessible at all times and its maintainer must not perform any cleanup that would corrupt cloned repos )
--depth:  specify how many revisions to check out.  with a depth of 1 you only get the most recent version of each file
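Those three strategies map directly onto `git clone` flags; a small sketch of picking one (the helper name is made up, the flags are real git options):

```python
# Choose a clone strategy based on the disk/network trade-offs above.
# `clone_command` is a hypothetical helper; the flags are real git options.

def clone_command(src, dest, strategy="full", depth=1):
    base = ["git", "clone"]
    if strategy == "local":       # hard-link objects (same filesystem)
        base.append("--local")
    elif strategy == "shared":    # borrow the source repo's object store
        base.append("--shared")
    elif strategy == "shallow":   # only the most recent `depth` revisions
        base.append("--depth=%d" % depth)
    return base + [src, dest]
```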

this discussion goes into great detail on the differences between shared and local, but it'll be awhile before i fully understand them: http://kerneltrap.org/mailarchive/git/2007/6/4/248230

i don't want to come across as a git poster-boy, because i still haven't given it a real-world test yet, but on paper it has all the features that i'm looking for, so here's hoping....


-chad

Sebastian Thiel

unread,
Jan 13, 2009, 2:34:38 PM1/13/09
to python_in...@googlegroups.com
Generally I agree with Chad's choice to use git as the filesystem behind an Asset Control System, as it is fast, reliable and deals rather well with binary files. Git's storage mechanism is very efficient at least.

Efficient handling of binary files is a main point, but as it has already been pointed out, there are other important features as well. I picked subversion because of the following reasons over git ( and back then, I had the choice ):
  1. SVN is very well known ( to me )
    • Git will cost a lot of research and probably lots of custom programming
  2. SVN runs perfectly on linux and windows, whereas windows also offers a nice explorer integration
    • Unfortunately, I need to support windows as well. As maya runs on windows and osx in addition  to linux, an asset management system would need to address all platforms
    • Some of git's advanced features appear to rely on characteristics of a linux file system that are impossible to get on windows ( hardlinks could be simulated with junctions; this has been mentioned in one of the previous posts as well )
  3. SVN has a python module which is known to work
  4. File Locking
    1. Native and fast in svn
    2. git would need that to be handled by some service controlled by the asset management system I suppose, so it's custom development here
  5. Partial Checkout/ Partial Update
    • It's important that people may checkout/update only the portion of the repository they actually work on ( and have a lock on ) - in my case it's individual files most of the time.
    • It's possible to retrieve only a single file for working
      • In git apparently one has to get the whole repository ( which can be prohibitive ) , but perhaps some special workarounds exist
    • Also I agree that the reason I need this might heavily be based on my local->server paradigm - other more 'gitty' paradigms might not need this at all - yet I cannot imagine how branch based workflows would work, and how branch merging would perform without conflicts.
  6. ( ACLs )
    • At least available in svn, but currently not used in my production as there is just no need
    • Could be useful in case of distributed productions
  7. Distributed Productions
    • Due to the centralized nature of svn it would naturally be capable of serving many locations, but without further adjustments it would be too slow for everyone not connected with at least 100 MBit
    • SVN Mirroring could be setup, but this would require a custom solution for a lock synchronization as well
    • git might handle that part more efficiently due to its network optimizations and it's distributed nature ( if merging would work safely )
    • In general, things get easier for both svn and git if one can assign logical portions of the production to one location that must just be pulled or updated by the main company from time to time - thus different locations completely own portions of the project, making individual file locks unnecessary. 
My conclusion back then was and still is that one will get good results faster by relying on svn, but one apparently has to accept some caveats which cannot easily be worked around.

(NOTE: SVK apparently does better here and there; it uses svn as a backend and goes deep into the api, so it actually works around the 'file duplication' when checking out files ).

SVN and Binary files

Yes, it is an issue. Every file is held twice in the repository checkout - the server side will thus contain twice as much information, updating files is slower as at least two possibly large files have to be handled.

Also I noticed that the delta algorithm appears not to be in effect with binary files or usually has no visible effect on files over a certain size. To me it felt as if it would always store PSD files ( for instance ) as a full copy, even though you literally just changed one pixel.

Transfer speed of binary files is rather slow - I get between 5-7 MB/s upstream. Updating large binary files is not fast either - there is a server side bottleneck due to the immense overhead imposed by the deltas. The more revisions that have to be retrieved, the worse it is.
A complete checkout of a repository with large binary files is quite slow and far from what I would like to have.

Git would probably score here ( at least it would be faster ).

In practice, checkout's are much slower than updates, fortunately you update most of the time.

Importing binary files is rather fast as SVN doesn't have to do much more than compressing it, so it is not so much of a problem.

For me, the biggest caveat truly is the waste of diskspace on the server - the slow binary performance only really shows when committing 100MB+ - everything else feels acceptable on today's hardware.

Asset Management and Branching

I don't really know how branches would properly be implemented in an asset management system with plenty of binary files. Merging algorithms will fail on binaries ( and complex ascii formats like .ma ), but if specialized, it could work well ( see the .gto format ).

Having different versions/variations of the same file around is a good thing though, but currently I catch that through the folder structure and conventions that are enforced by the system. SVN does not know about it though.


Finally ...
I absolutely think that git could do it if one writes some new porcelain for it that enforces the constraints of an asset management system, but from what I see it will be a great effort ( good things never come easy though ! ).

I will try to switch my own development to git as far as possible, still piping changes to svn in the end ( using git-svn ). This way I should learn many interesting things about it that might help me to contribute some more useful information to this topic in future.

Jo Jürgens

unread,
Jan 15, 2009, 7:50:57 PM1/15/09
to python_in...@googlegroups.com
Some user experience with SubVersion...

We've used Subversion on several feature films (Free Jimmy, Kurt Turns Evil and another one currently in production). Integration with Maya is done through Mel scripts that pass system commands to the SVN client, and committing is integrated with the save and save as commands, so that every time the user saves a version controlled file, a promptDialog pops up asking if he or she wants to commit.

Our SubVersion setup does not use locking. If you have two artists working on the same file at the same time, you have a production management problem, not an SVN problem. If several people do work on the same asset, you most definitely want to have the revision history for both of them, and then you either let the two artists in question fight about whose version wins, or merge the differing work manually in Maya.

We have a Python script running all the time that tracks what files are open on every machine in the building, and warns the second person attempting to open a file, or an asset (if someone is working on the model file, you get a warning when opening the UV file, etc). I think this is a better solution, as it also covers files that are not SubVersioned (you dont want to have people save over those either), and files don't get stuck when Maya crashes.
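The core of such a tracker is just an advisory registry; a minimal in-memory sketch (a real one would sit behind a shared database or service, and the class name is made up):

```python
# Minimal sketch of an advisory "who has this file open" registry
# like the one described above. A real version would use a shared
# database or network service; this uses an in-memory dict to illustrate.

class OpenFileRegistry:
    def __init__(self):
        self._open = {}  # path -> user currently working on it

    def checkout(self, path, user):
        """Register `user` as working on `path`; return the name of
        whoever already has it open, or None if it was free."""
        holder = self._open.get(path)
        if holder is None:
            self._open[path] = user
        return holder

    def release(self, path, user):
        """Clear the entry, but only if `user` is the one holding it."""
        if self._open.get(path) == user:
            del self._open[path]
```

Because the registry is advisory, a Maya crash just leaves a stale entry to clean up rather than a stuck lock.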

We do not use merging either. I don't see how helpful that could be when dealing with files that may contain hundreds of thousands of lines of text.

The Subversion log is stored in a database, so it can be integrated in production management web pages etc without having to make slow SVN log requests all the time.

There are several features I want to add to our current setup. Among them...
Better integration with our database tracking system. We have strict conventions for what files are associated with each asset, animation scene etc. Whenever an artist creates a file that is part of the asset workflow, that is logged in the database, and should also be automatically added and commited to SubVersion.

A really user friendly GUI for performing the most essential SVN operations (not even TortoiseSVN is that user friendly from an artist's point of view). This would automate most processes, like adding, renaming, moving files; reverting/saving out revisions, etc.

Automatic updating of assets when working with local checkouts. This could possibly be running in a thread at regular intervals, and then when the artist opens a file, an additional script makes sure the file and all its references are up to date.

Various working copy repair tools.
SubVersion is way too buggy when the working copy resides on a network share. You'll commit a file and everything is dandy, but somehow SubVersion thinks the commit wasn't made, so it tries to update the working copy, making a total mess as it tries to merge the commit with the last save, or it just flat out refuses to commit. Usually, the workaround is to rename the latest non-committed saved file, update the folder so that Subversion puts the file back, delete that file, then rename the correct saved file back again and commit that.

I agree that the way to go is to create something more general that can then be hooked up with Maya, Softimage, Fusion, Nuke etc through Python, and in some way to Adobe software, 3ds max, MS Word etc

Here's how its done in Perforce: http://www.perforce.com/perforce/products/plugins-p4gt.html

The best would definitely be to make it version control, OS and database agnostic, so one can use any combination one would like of Windows, Linux, SubVersion, CVS, Perforce, MySQL, SQLServer and so on.

Jo Jurgens
Senior TD
Qvisten Animation
Oslo, Norway

Chad Dombrova

unread,
Jan 24, 2009, 2:32:32 PM1/24/09
to python_in...@googlegroups.com
hi all,
for those interested in being involved in an open-source asset
manager, would the necessity of installing cygwin or MinGW/MSys on
windows be a deal-breaker?

btw, Jo, thanks for the additional insight into using Subversion in
production. you hit on some interesting topics, particularly, the
need to design an asset management system with hooks for a custom
tracking database, or perhaps even a simple default tracking database
for those who are starting from scratch.

-chad


Olivier Renouard

unread,
Feb 2, 2009, 3:09:57 AM2/2/09
to python_in...@googlegroups.com
Hi,

Didn't have time to take part in the discussion earlier though it's very
interesting.

Isn't there an issue with SVN too, that you can't so easily drop data
from a repository? As a production advances and the number of versions grows,
it's nice to be able to drop old versions (or rather unreferenced old
ones, ones that are no longer referenced by any asset).

In our studio we ended up developing a SVN replacement that is
specifically geared towards manipulation of 3D assets / large volumes.
The author based it on PostgreSQL and added things like support for
redundant servers. It works on a "lock/release" basis which sounds good
theoretically but is actually not that easy to stick to in a real
production. It's lacking several features, like a way to handle
distributed repositories (like for two different sites working together).

Wish the project could have gone open source but since it hasn't, I'm
interested to see if open source alternatives can develop. Will probably
not be able to look at these things until about one year from now though.

Olivier
--
Olivier Renouard

Zac Shenker

unread,
Feb 2, 2009, 5:25:15 AM2/2/09
to python_in...@googlegroups.com
Hi all,

Sorry I haven't replied in a while, I have been away on holidays and was at linux.conf.au the other week.

I went to a talk at linux.conf.au about Google's additions to svn and talked to the presenter afterwards about using SVN with very large projects. He suggested possibly git, or better still, custom building your own solution.

At this stage I am leaning towards taking a further look at using git and probably building an asset management system that you use in conjunction with git for the version control.

Regards,
Zac Shenker

Chad Dombrova

unread,
Feb 2, 2009, 11:46:31 AM2/2/09
to python_in...@googlegroups.com
hey olivier!

long time no hear.

> Isn't there an issue with SVN too, that you can't so easily drop data
> from a repository? As a production advances and number of versions
> grow,
> it's nice to be able to drop old versions (or rather unreferenced old
> ones, ones that are no longer referenced by any asset).

yes, this was one of my biggest complaints. (see earlier rant)

> he suggested possibly git or really better would be just custom
> building your own solution.


i have a friend who is right now looking at modifying the git source
code to allow it to behave as a symlink manager. looks promising so
far. i'll let you all know how it turns out.

-chad

Farsheed

unread,
Feb 16, 2009, 2:09:22 PM2/16/09
to python_inside_maya
The scenario is Alice and Bob are both making changes to the same
binary resource at the same time. They each have their own local repo,
cloned from one central remote.
This is indeed a potential problem. So Alice finishes first and pushes
to the central alice/update branch. Normally when this happens, Alice
would make an announcement that it should be reviewed. Bob sees that
and reviews it. He can either (1) incorporate those changes himself
into his version (branching from alice/update and making his changes
to that) or (2) publish his own changes to bob/update. Again, he makes
an announcement.
Now, if Alice pushes to master instead, Bob has a dilemma when he
pulls master and tries to merge into his local branch. His version
conflicts with Alice's. But again, the same procedure can apply, just on
different branches. And even if Bob ignores all the warnings and
commits over Alice's, it's always possible to pull out Alice's commit
to fix things. This becomes simply a communication issue.
Since (AFAIK) the Subversion locks are just advisory, an e-mail or
instant message could serve the same purpose. But even if you don't do
that, Git lets you fix it.
No, there's no locking mechanism per se. But a locking mechanism tends
to just be a substitute for good communication. I believe that's why
the Git developers haven't added a locking mechanism.
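The Alice/Bob flow above, written out as the git commands each side would run (branch names `alice/update` and `bob/update` come from the text; the remote name `origin` and the commit message are assumptions):

```python
# The review flow from the text, as plain git command strings.
# `origin` and the commit message are assumptions for illustration.

ALICE = [
    "git commit -am 'update texture'",
    "git push origin HEAD:alice/update",      # publish for review
]

BOB_OPTION_1 = [                               # build on Alice's version
    "git fetch origin alice/update",
    "git checkout -b bob/update origin/alice/update",
]

BOB_OPTION_2 = [                               # publish his own for review
    "git push origin HEAD:bob/update",
]
```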

Reference: http://stackoverflow.com

Sincerely,
Farsheed.

Farsheed

unread,
Feb 16, 2009, 2:26:08 PM2/16/09
to python_inside_maya

Sebastian Thiel

unread,
Feb 16, 2009, 2:52:18 PM2/16/09
to python_in...@googlegroups.com
>>But a locking mechanism tends to just be a substitute for good communication.<<
I just have to pick this up quickly and state that ( human-based ) communication cannot in fact replace a ( server based ) locking mechanism. During my everyday work it happens that I have to fix scenes or adjust them. People would never ( and should not ) send me a message every time before they start working on a file just because I could possibly want to make changes at any time. It's just not feasible, even for people within one team.
Locking files based on human communication will suffer from a great deal of human error as well, and will not be usable.


As long as you cannot diff the files you manage at all or at least not in a meaningful way, exclusive write access to files is required. Even with a perfect production management there are still guys like me that at least want some notification if some other person is currently registered to work on a file - without a server based system that would not be possible.


 But this is just my opinion based on everyday experience - others might have made other experiences that they might want to share here.

Regards,
Sebastian

chadrik

unread,
Feb 16, 2009, 3:02:59 PM2/16/09
to python_in...@googlegroups.com
sebastian,
what size studio do you work for?

our studio varies between 50-80 total. we have no locking mechanism
and it has never really been a problem. we use referencing or
importing/exporting within maya to bypass these kinds of issues. for
example, if we have a very large set to build, we break it up into
logical chunks and assign each chunk to a different artist. when
necessary, the artist publishes his part of the asset into the
reference stream. the lead artist for the asset works on a master
asset that references in the published sub-assets. the lead is
responsible for QC and changes that happen across the entire asset.
we use a similar strategy for large animations, but using importing/
exporting instead of referencing.

perhaps for larger studios it's much more of an issue....

-chad


Olivier Renouard

unread,
Feb 16, 2009, 3:34:05 PM2/16/09
to python_in...@googlegroups.com
Hi,

Well, my experience differs now that I've been using a lock based
versioning system here for our 3D assets (scenes, textures, etc) for
some time.

1) Locks on local files are a nuisance

Often you'll need write access to some file even though you don't plan
to reintegrate the modified file. Caches, shadow maps, etc. Sometimes
you can't render / work without that. If the problematic file is just a
reference of the scene you're actually editing, and you plan to only
push the main scene back, it won't break anything, but if someone locked
it then you're out of luck.

People presented with this roadblock will usually just copy the local
files to work. Then you lose the versioning benefits. You actually
created a "local, uncharted, unmaintained" branch, and the situation is
already the worst case scenario a versioning system can give you.

Conclusion : I just keep chmod 777 -Ring the hell out of my local
repository everytime I fail a test render because of a silly local lock.
For me they are just an annoyance.

2) You can't always respect the exclusive rule.

As much as people taking the lock on a file in turn is the theoretically
correct process, in practice it seems to me it often turned out not to be
doable. To be able to work in parallel you can set up as much as you can
using the tools Maya offers, like references (one does uv editing in
scene A, you reference scene A in scene B and shade there, etc...),
still it has its limits. At many points I found you had to work in
parallel on same file, export and reintegrate, ie manually merge. If the
versioning system doesn't allow that, then again people will just make a
copy, and version that as a variant of the original scene. Back to the
branch / merge case except again it's even worse as there is no
information of what version and when this scene was branched from.

Conclusion : I'd rather get a warning, and as a result of pushing the
asset have a branch be created. That way I know I got to reintegrate the
changes at some point, and the information of what it branched from is
preserved. Merging can't be as efficient for Maya scenes as for code.
Then again, a Maya scene is pretty much code though, typical production
scene structure is often "load these refs, apply these commands" and in
many cases you can extract diff patches that will help merging. Locks
could be there, but as warning posts. Possibly make them "hard locks"
for some people and "soft locks" for others depending on some
accreditation level.

Would love to hear more thoughts as well!

Olivier
--
Olivier Renouard

Farsheed

unread,
Feb 16, 2009, 4:39:25 PM2/16/09
to python_inside_maya
Have you ever seen this video? http://www.youtube.com/watch?v=4XpnKHJAok8
Highly recommended.

-Farsheed.

Farsheed

unread,
Feb 16, 2009, 4:43:00 PM2/16/09
to python_inside_maya
Just want to mention: please study this page *twice* :) along with its
comments.
http://stackoverflow.com/questions/119444/locking-binary-files-using-git-version-control-system

-Farsheed.

Sebastian Thiel

unread,
Feb 17, 2009, 4:16:25 PM2/17/09
to python_in...@googlegroups.com
The studio, with its size of about 20 to 25 people, is rather small. This is perhaps why plenty of "production management" ( unfortunately ) is only done using IM, people talking, or emails here and there. It's rather self-regulatory.

I don't think size matters here ... ( yeah, take your comments ;) ).

Even though it might only be me experiencing locks on files as I am 'outside of production' in some sense, to me it's clear that I cannot live without them.
There were a few times when I had to batch over all our assets ( at night ) and republish them to get in some changes. Some of the files would fail to publish as a lock could not be retrieved. Did they not check in the file before they left? Did someone leave the scene open over night? Or perhaps someone is a little paranoid and doesn't want to release the lock yet. No matter the reason, it's good that the batch failed instead of succeeding just to be overwritten the next day by the guy actually working on the file.

As >>it has never really been a problem<< in your studio, I assume these issues arose sometimes and people are dealing with them or working around them. Locks could prevent these issues from arising in the first place.


Speaking of which, Olivier came up with some issues with locks [ see 1) ]. To me it reads as if he actually has issues with how the locks are used in practice. These locks blocked him more than they helped.
Locks are there to give one party exclusive write access to files that revision control systems cannot merge. Sounds good to me to prevent further damage.
The moment things get more complex and more than one lock is involved, it becomes troublesome and error prone for the user to handle. The issues described in 1) are bypassed in this studio by the notion of resources. Scenes have resources, which can be anything from a geometry cache to a shadow map or texture file. They are copied/updated from the network upon file open to a local destination, following the file structure used in the main project path. There they are writable and easily accessible, all with just one lock, which is on the main scene you work on.
Only during publishing, when you actually want to commit your changes to the world, are locks retrieved for all changed resources to update them in the repository and commit them. If one lock cannot be retrieved, the whole operation fails, and you finally have the unpleasant situation described in 1). This can possibly happen if a machine crashes during a commit, and automated straying-lock cleanup strategies should come into play then.
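That all-or-nothing publish step can be sketched as follows (`lock` and `unlock` stand in for whatever svn lock calls the real system would make):

```python
# Sketch: take locks on every changed resource, or none at all.
# `lock` and `unlock` are caller-supplied stand-ins for real svn calls.

def publish(resources, lock, unlock):
    """Return True only if locks on ALL resources were acquired;
    on any failure, roll back the locks already taken and fail."""
    taken = []
    for res in resources:
        if not lock(res):
            for held in taken:   # roll back partial acquisition
                unlock(held)
            return False
        taken.append(res)
    return True                  # the actual commit would follow here
```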

2) involves branching and merging of scenes, which I cannot directly support either. Although svn, in my case, can keep track of copies ( and locks ), that's pretty much all it can do for you. The person holding the lock on the main file would be responsible for the reintegration of your additions into the master scene.
To me these situations should be special cases.

my conclusion:
If one does not yet have a system with lock support and/or good integration into your main software packages to make it user friendly, it might be less hassle not to use it and live with occasional file overwrites destroying someone's work.
In the end it might be about choosing the smallest nuisance.
The moment bots come into play, or people batching their way through your files, you must use a locking system ( and a way to handle crashes and straying locks ).
As I have worked in just one studio without a locking system, and a few others that had one, I can recall that file accidents happened quite a lot, but could always be fixed by the numerous backups that were lying around somewhere ( studio size 45-60 people ). People didn't scream anymore if something happened, they just fixed it silently ( the same way you restart maya if it crashes, without dropping the line of conversation you have with your colleague :) ).
The studios with a locking system had worse and better integrations into their daily workflow. In the first kind, they were a burden one would feel every day ( but your tears dried over the months ) and occasionally they would save some of your time. In the latter ones, what I really feel is the time-saving aspect.
Also I get this smile on my face when my 4 cores >safely< do work in 2 hours that would have taken days the manual way.

Regards,
Sebastian
[ who waives the pro-lock pennant ;) ]