New to Rez

Tony Barbieri

Jun 11, 2014, 9:56:51 AM
to rez-c...@googlegroups.com
Hello!

My name is Tony Barbieri and I've been working on a Pipeline at Psyop over the last couple of years.  I'm looking to overhaul (or basically implement) our deployment and build systems.  Rez looks very interesting and I think would fit the bill for us.  I have a few newb questions about how I should go about integrating it with what we already have.  I'm also looking to jump right into the 2.0 branch as it seems like there are a lot of changes happening in there.

Some details about our facilities:
  • We use Shotgun and Shotgun's Toolkit framework.
  • We are primarily Windows, but do have some OSX and Linux.  Will most likely move more towards Linux in the future but that is very much TBD.
  • Currently, 90% of our tools are written in Python.  The other 10% consists of C++ projects, mostly other Open Source projects such as OpenImageIO or Arnold shaders.
  • We have a "bootstrap" process that puts artists into their project context.  This was all written using bash and python.  Even though we are primarily Windows, we do run bash using the shell provided by Git.
  • We have all of our code in various git repositories hosted internally using Gitlab.
  • We are multi-office.
I suppose first off I'm looking for suggestions about how best to begin integrating Rez: where I would start and what responsibilities Rez would have.  Below I've outlined some of the components we already have in play, which may help you understand our current ecosystem.  I apologize in advance for the large amount of detail...

Bootstrapping

Currently, as I mentioned above, we have a bootstrap process that will drop users into a sub shell based on the project and "branch" the user has requested to set context to.  A "project branch" is basically a fork of the project configuration and code that allows a user to run a different code base than what the "primary" branch is running.  The primary branch is the default and is used when a branch is not specified.  Project branches are mostly used for development work.  Once in the sub shell, our bootstrap process has exported environment variables that can be used to query the various project attributes such as name, location, etc.  It also sets up additional values in the PATH and PYTHONPATH variables.  PYTHONPATH is squashed so the only path in there points to our sitecustomize.py file, which handles any additional bootstrapping when a Python instance is initialized.  We use a special environment variable, PSYOP_PYTHONPATH, to act as our PYTHONPATH because we want the paths we set to come after the default Python packages (in most situations).  If we don't do this, we've found Python to be sluggish when importing due to stat'ing over folders on our network filesystem, which can be quite slow on Windows.

The way our bootstrap code basically works is that we have some entry-point bash scripts that eventually call project-specific bootstrap code, which offloads most of the work to Python.  Python then prints a bunch of bash-formatted code which the bash sub shell evaluates.

A set context command looks like:
sc tbarbieri_0001p -b test_branch

My first question is, where should Rez fit into this?  Is Rez meant to fill the role of our current bootstrap process?  Should our bootstrap code execute a Rez command?  I would imagine the second scenario, as Rez will have no concept of our projects, project file system structure, etc., and we'll want to version Rez per project so updates to it won't affect projects currently in production.  I imagine I will need to set up a Rez environment per project, and ideally this will be automated, as our project initialization code is all automated and is driven by various project management tools (Shotgun, for example).  So initially I assume it would use the latest global config and then get tweaked/upgraded as a project's life goes on.

Deployment

Currently we don't have any real deployment to speak of other than a human ssh'ing into a Linux VM and updating repositories.  This was fine to get us up and running as quickly as possible but, as expected, it is becoming unmanageable.  Some of our packages are versioned, others are not.  I'd like all of our code base to be versioned appropriately.  All projects reference code from our global checkouts living on a network filesystem.  I'd like us to move to a system where each project has local checkouts of the code configured to run for it, but in order to do so we need a system to easily update the project's code base when necessary (upgrading to newer versions, important bug fixes to existing versions, etc.).  I believe this is definitely a place Rez would come into play, but I'm not 100% certain how these updates would occur: pushed to each project, or pulled to each project through some call to a command performed by a user.  Is there a paradigm that Rez prefers?  Is it typically left to a TD or developer to set context to a project and pull any updates to the local Rez config?

We typically have many projects going on simultaneously so we'll need a way to manage all of these code bases easily as we have a very small development team.  Any suggestions about how Rez could best be used in this situation would be much appreciated.

Application "Addons"

We have written an API that manages launching and configuring applications per project.  Essentially it sets up any environment variables or pre-launch configuration an application requires, allows us to set what version of software should be launched while in a project context, and allows us to configure what "addons" and addon versions are loaded for that project (Maya modules, plugins and script directories, Nuke plugins and script directories, Houdini OTLs, etc.).  Currently these also live in a global network location, but we've been discussing "installing" them into a project and trying to find ways to minimize the number of paths required in the environment to help with application startup times.  I imagine Rez could handle some of this as well; does it have any notion of how to set up an environment per application?  If so, I imagine we could integrate Rez into our "launch" API rather than our current "addons" configuration manager.  We've been looking to overhaul it anyhow and add proper platform support.

Toolkit

Shotgun's Toolkit has its own deployment manager and configuration setup per project.  This is a long shot, but it could be interesting to see if Toolkit's deployment hooks could be abstracted to leverage Rez.  Ideally we'll have a single API and system for managing building and deployment.  It's something I could work on or bring up with the Shotgun folks as I get more comfortable with how Rez works.  I'm pretty familiar with the Toolkit framework at this point and have contributed to the core, so I am familiar with most of its internals.

Windows

I started to modify some of the Rez code to get it to run under Windows using the Git Bash MSYS shell.  So far I think I have it executing correctly (there will be some more areas that I'll have to mess with due to the way the different shells (MSYS, Cygwin) deal with paths, especially the PATH environment variable).  Once I am more familiar with how I should go about configuring and setting up Rez, I'll continue testing these modifications and eventually create pull requests once I'm sure I have things stable.


Again, I apologize for so much detail.  Hopefully understanding some of our current systems will help determine how and where Rez will fit in.  Any advice or tips would be very much appreciated.

Thanks for your time and for creating Rez!

Best,

--
Tony

Chad Dombrova

Jun 11, 2014, 1:17:04 PM
to rez-c...@googlegroups.com

Hey Tony,

Bootstrapping

Currently, as I mentioned above, we have a bootstrap process that will drop users into a sub shell based on the project and "branch" the user has requested to set context to.  A "project branch" is basically a fork of the project configuration and code that allows a user to run a different code base than what the "primary" branch is running.  The primary branch is the default and is used when a branch is not specified.  Project branches are mostly used for development work.  Once in the sub shell, our bootstrap process has exported environment variables that can be used to query the various project attributes such as name, location, etc.  It also sets up additional values in the PATH and PYTHONPATH variables.  PYTHONPATH is squashed so the only path in there points to our sitecustomize.py file, which handles any additional bootstrapping when a Python instance is initialized.  We use a special environment variable, PSYOP_PYTHONPATH, to act as our PYTHONPATH because we want the paths we set to come after the default Python packages (in most situations).  If we don't do this, we've found Python to be sluggish when importing due to stat'ing over folders on our network filesystem, which can be quite slow on Windows.

Rez builds a dependency graph of “packages” based on their “requirements” and then executes commands contained within those packages in a target language/context (bash, tcsh, python, etc). A “package” can correspond with an application, library, or just a logical grouping of commands.

So, for example, you might create a base set of packages like this:

name: psyop
versions:
- master_branch
- beta_branch
commands:
  # setup the base pipe env
  # (can use switches based on the version, or use the version as a {variable})

---
name: python
version: "2.7"
variants:
- [ platform-windows ]

---
name: psyop_python
requires:
- psyop
- python
commands:
  # set psyop-specific python env vars

---
name: maya
versions:
- "2014.01"
- "2014.59"
- "2015.00"
variants:
- [ platform-windows ]
requires:
- python-2.7
commands:
  # set essential maya env vars like MAYA_LOCATION

---
name: psyop_maya
requires:
- psyop
- maya
commands:
  # set psyop-specific maya env vars

---
name: project
version: coke
requires:
- psyop-beta_branch
- psyop_maya
- maya-2014
commands:
  # setup the coke env

with this layout we could then request a shell with an env for the coke project using rez env project-coke. notice that project-coke requires both psyop_maya and a specific version of maya. That’s because psyop_maya is basically a “pass-through” package for maya which does not specify any version requirements. rez takes version requirements at all levels into consideration to find a best match, so since project-coke requires version 2014 of maya, and psyop_maya did not specify any version at all, the latest compatible version is used: 2014.59.

this might not be exactly how you want to set things up, but learning how to make granular packages was an epiphanous moment for me, so I thought I’d pass along these concepts early. the great thing about this is that you can run rez env maya to test maya without any of your studio’s changes, which is very handy for troubleshooting.

The way our bootstrap code basically works is that we have some entry-point bash scripts that eventually call project-specific bootstrap code, which offloads most of the work to Python.  Python then prints a bunch of bash-formatted code which the bash sub shell evaluates.

This is similar in concept to what rez env does.

Is Rez meant to fill the role of our current bootstrap process?

if your bootstrap code is exporting env vars in bash based on information acquired from python, then in a lot of ways, yes. the “commands” section of each rez package is python, so it can do some pretty sophisticated things. however, you can also use rez as a python api, so your bootstrap code might use a database to find out what project a user is assigned to, etc, then correlate that info to rez packages and call rez env to set up your environment. it’s up to you to determine how much you want to do in the commands section (typically things that don’t require knowledge of the “big picture”) vs outside of rez in your custom bootstrap code.
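
to make that concrete, here’s a rough sketch of what a python-based bootstrap could look like, assuming the rez 2 python api (ResolvedContext lives in rez.resolved_context there); the lookup_project_packages helper and the package names are just hypothetical placeholders for your own shotgun/database lookup:

# sketch: bootstrap code building a rez "request" from project data
# (assumes the rez 2 python api; lookup_project_packages and the package
#  names below are hypothetical stand-ins for your own lookup code)
from rez.resolved_context import ResolvedContext

def lookup_project_packages(project, branch):
    # hypothetical: query your project database for the packages to request
    return ["psyop-%s" % branch, "project-%s" % project]

request = lookup_project_packages("coke", "beta_branch")
context = ResolvedContext(request)
context.print_info()   # show which package versions the solver picked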

Deployment

Currently we don't have any real deployment to speak of other than a human ssh'ing into a Linux VM and updating repositories.  This was fine to get us up and running as quickly as possible but, as expected, it is becoming unmanageable.  Some of our packages are versioned, others are not.  I'd like all of our code base to be versioned appropriately.  All projects reference code from our global checkouts living on a network filesystem.  I'd like us to move to a system where each project has local checkouts of the code configured to run for it, but in order to do so we need a system to easily update the project's code base when necessary (upgrading to newer versions, important bug fixes to existing versions, etc.).  I believe this is definitely a place Rez would come into play, but I'm not 100% certain how these updates would occur: pushed to each project, or pulled to each project through some call to a command performed by a user.  Is there a paradigm that Rez prefers?  Is it typically left to a TD or developer to set context to a project and pull any updates to the local Rez config?

if i understand you correctly, this is not something that rez currently supports, though it may provide what you need in other ways. rez contains a build management tool that releases to a centralized repository. it will help you to create and maintain a sane release structure — particularly for compiled projects like OpenImageIO, OpenEXR, Alembic, etc — such that other projects can request various combinations and versions of these packages. “releasing” inevitably means divorcing source code from its git repo, even in the context of releasing a python module using a rez wrapper around pip, for example.

both the centralization and the releasing sound like they are counter to your idea of git checkouts per project. it might be possible to extend rez for this case, if we can get a clear grasp of how it would fit into the overall ecosystem, but it might also be possible that once you understand how rez works, you can make it work for you. however, in some ways it sounds like you might want a continuous integration engine like jenkins to automate checkout and build processes after a commit is made.

ok, I’ll let you process this info for a bit.

chad.

allan.johns

Jun 11, 2014, 1:25:15 PM
to rez-c...@googlegroups.com
Hi Tony and welcome, see comments below.



On Wednesday, June 11, 2014 6:56:51 AM UTC-7, Tony Barbieri wrote:
Hello!

My name is Tony Barbieri and I've been working on a Pipeline at Psyop over the last couple of years.  I'm looking to overhaul (or basically implement) our deployment and build systems.  Rez looks very interesting and I think would fit the bill for us.  I have a few newb questions about how I should go about integrating it with what we already have.  I'm also looking to jump right into the 2.0 branch as it seems like there are a lot of changes happening in there.

Some details about our facilities:
  • We use Shotgun and Shotgun's Toolkit framework.
  • We are primarily Windows, but do have some OSX and Linux.  Will most likely move more towards Linux in the future but that is very much TBD.
  • Currently, 90% of our tools are written in Python.  The other 10% consists of C++ projects, mostly other Open Source projects such as OpenImageIO or Arnold shaders.
  • We have a "bootstrap" process that puts artists into their project context.  This was all written using bash and python.  Even though we are primarily Windows, we do run bash using the shell provided by Git.
  • We have all of our code in various git repositories hosted internally using Gitlab.
  • We are multi-office.
I suppose first off I'm looking for suggestions about how best to begin integrating Rez: where I would start and what responsibilities Rez would have.  Below I've outlined some of the components we already have in play, which may help you understand our current ecosystem.  I apologize in advance for the large amount of detail...

Yes, 2.0 is a major refactor and it will be much better for you to start there.
 

Bootstrapping

Currently, as I mentioned above, we have a bootstrap process that will drop users into a sub shell based on the project and "branch" the user has requested to set context to.  A "project branch" is basically a fork of the project configuration and code that allows a user to run a different code base than what the "primary" branch is running.  The primary branch is the default and is used when a branch is not specified.  Project branches are mostly used for development work.  Once in the sub shell, our bootstrap process has exported environment variables that can be used to query the various project attributes such as name, location, etc.  It also sets up additional values in the PATH and PYTHONPATH variables.  PYTHONPATH is squashed so the only path in there points to our sitecustomize.py file, which handles any additional bootstrapping when a Python instance is initialized.  We use a special environment variable, PSYOP_PYTHONPATH, to act as our PYTHONPATH because we want the paths we set to come after the default Python packages (in most situations).  If we don't do this, we've found Python to be sluggish when importing due to stat'ing over folders on our network filesystem, which can be quite slow on Windows.

The way our bootstrap code basically works is that we have some entry-point bash scripts that eventually call project-specific bootstrap code, which offloads most of the work to Python.  Python then prints a bunch of bash-formatted code which the bash sub shell evaluates.

A set context command looks like:
sc tbarbieri_0001p -b test_branch

My first question is, where should Rez fit into this?  Is Rez meant to fill the role of our current bootstrap process?  Should our bootstrap code execute a Rez command?  I would imagine the second scenario, as Rez will have no concept of our projects, project file system structure, etc., and we'll want to version Rez per project so updates to it won't affect projects currently in production.  I imagine I will need to set up a Rez environment per project, and ideally this will be automated, as our project initialization code is all automated and is driven by various project management tools (Shotgun, for example).  So initially I assume it would use the latest global config and then get tweaked/upgraded as a project's life goes on.

So to give a very brief overview - Rez is primarily a package management system. Packages are self-contained projects that contain a project definition file (usually package.yaml) that describe the project and its dependencies. Given a "request" (a list of packages and versions that you require), Rez runs a resolving algorithm to calculate the "resolve" - the specific list of packages that fill that request. It can then create an environment configured for those packages. Packages contain a set of 'commands' in their definition file, that determine how that package affects the environment it is resolved into. A classic example is a python module, which will usually append itself to PYTHONPATH. In rez-2, a package's 'commands' are written in python using a mini-API that lets you perform operations such as appending environment variables, in a platform- and shell-agnostic way (meaning that your packages are compatible across platforms).
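
To make that concrete, here is a minimal sketch of what a package definition for a simple python library might look like (the package name, version and paths are purely illustrative; the commands block is python using the mini-API described above):

# package.yaml (illustrative): a hypothetical studio python library
name: texture_util
version: 1.0.0

requires:
- python-2.7

commands: |
  # this is python, using rez's mini-API; rez translates it into the
  # target shell (bash, tcsh, etc) when the environment is resolved
  env.PYTHONPATH.append("{root}/python")
  env.PATH.append("{root}/bin")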

There is also a build system and a release system. The build system just creates the appropriate target build environment(s) for the project, then calls an underlying build system (cmake is the default, but the build system is pluggable in rez-2.0, so you can add support for others). The release system is basically just a build that installs to a different location, and updates the repo associated with the source, tagging it appropriately.

So to answer your question - I try to stress to people that rez is a package manager, not a production environment manager, so yes you are right, Rez is more like the tool/API you would use after your bootstrapping process has happened. There are lots of different ways you could do this. For example, in Rez-2 there is something called "suites". A suite can be thought of as a more general python virtualenv (more general because it can contain any type of package, not just python modules). You can create a resolved environment in Rez, and then store it in a "context file" (.rxt extension). This file can be used later on to resurrect the same environment again. The resolving has already happened so the solver isn't run again - a context file is a 'bake' of a resolve. A "suite" collects a set of contexts together, places them all into the same directory, and then creates wrapper scripts that expose all the tools available in the various contexts, in a single path. If you have a suite, all you need to do is add its ./bin path to $PATH, and then you can run any of its tools from an unconfigured environment. This is quite a powerful concept and it's what we here at Method are going to use in our own production bootstrapping. Basically, in this scenario you would somehow create the suite that you require for each show/shot/etc ahead of time, and then 'source' that suite at bootstrap time by adding it to PATH.

There are other ways you could integrate Rez - you may want to have a more runtime system instead, where bootstrapping into a shot generates a rez "request" on the fly, which is then handed to rez and resolved there and then. Rez is quite flexible, and in 2.0 it has a rich API, so there's a lot you can do.
 

Deployment

Currently we don't have any real deployment to speak of other than a human ssh'ing into a Linux VM and updating repositories.  This was fine to get us up and running as quickly as possible but, as expected, it is becoming unmanageable.  Some of our packages are versioned, others are not.  I'd like all of our code base to be versioned appropriately.  All projects reference code from our global checkouts living on a network filesystem.  I'd like us to move to a system where each project has local checkouts of the code configured to run for it, but in order to do so we need a system to easily update the project's code base when necessary (upgrading to newer versions, important bug fixes to existing versions, etc.).  I believe this is definitely a place Rez would come into play, but I'm not 100% certain how these updates would occur: pushed to each project, or pulled to each project through some call to a command performed by a user.  Is there a paradigm that Rez prefers?  Is it typically left to a TD or developer to set context to a project and pull any updates to the local Rez config?


It sounds like this is where things are a bit different. Rez treats software as traditional packages - in order to use a package, you build it from a local checkout, and you release it. You don't consume code that has been directly checked out onto disk somewhere; you are always using packages that have been released (ie installed). How you manage your software as a developer (branching repos, versioning up, etc.) is up to you. For example, Rez does not set the version of a project - that is up to the developer. This is deliberate, because different types of version updates mean different things, and that can't be automated (is the change a patch, or a major version update?).

Generally what happens is, most packages are 'global', ie shared across the studio. These are released into a global packages path (Rez installs packages to paths, and finds them based on a package search path). If you wanted to have show-specific packages, you would use the same workflow, but you would release these to a show-specific packages path, and when you used Rez to resolve environments for that show, you would have this path on the packages search path.

This build-install-consume cycle makes it really easy to develop and test software. When you're developing, you build and 'install' your package, rather than 'releasing' it. When a package is just installed, it goes into your "local" packages path, typically ~/packages. This path appears at the front of your package search path. To test, you simply use rez-env to resolve a new environment. rez-env will see your local packages and will use them in preference to centrally released packages - but will still pick up any of your package's dependencies from central releases. This means you can easily test your package with other packages for the global site, or specific show, etc.
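
As an illustration, the package search path for a show might look something like this in your rez config (a hedged sketch: packages_path is the actual rez setting, but the paths themselves are made up):

# rez config (yaml) sketch: per-show package search path
# ("packages_path" is rez's setting; these particular paths are illustrative)
packages_path:
- ~/packages                     # local installs for testing, searched first
- /projects/coke/rez/packages    # show-specific releases
- /software/rez/packages         # global studio releases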

So basically when it comes to updating projects and so on, Rez takes a backseat. This is deliberate - the goal of Rez was not to establish a paradigm for managing project development, but rather, for facilitating dependency management, deployment and consumption of packages. Different studios will want to manage how they develop and maintain code in different ways, and I think it should be possible to use Rez in these differing environments.

We typically have many projects going on simultaneously so we'll need a way to manage all of these code bases easily as we have a very small development team.  Any suggestions about how Rez could best be used in this situation would be much appreciated.

See above. Here at Method for example, we have separate git repos for each package - we simply rez-build and rez-release from local checkouts of each repo, and use git to manage development branches.
 

Application "Addons"

We have written an API that manages launching and configuring applications per project.  Essentially it sets up any environment variables or pre-launch configuration an application requires, allows us to set what version of software should be launched while in a project context, and allows us to configure what "addons" and addon versions are loaded for that project (Maya modules, plugins and script directories, Nuke plugins and script directories, Houdini OTLs, etc.).  Currently these also live in a global network location, but we've been discussing "installing" them into a project and trying to find ways to minimize the number of paths required in the environment to help with application startup times.  I imagine Rez could handle some of this as well; does it have any notion of how to set up an environment per application?  If so, I imagine we could integrate Rez into our "launch" API rather than our current "addons" configuration manager.  We've been looking to overhaul it anyhow and add proper platform support.


 What you've described is basically what Rez is all about, hopefully that's been made obvious in my earlier description. The primary goal of Rez from the start was to be able to quickly create target environments configured for the requested packages. A lot of work has gone into this aspect of Rez and 2.0 is especially good at it. The solving algorithm ensures that no package version clashes occur, whilst at the same time providing the most recent version of each package possible.

Rez has a very general approach to resolving environments - it doesn't treat applications as a special case, for example (nor plugins). Everything is a package, and what combination of packages you put into a single resolved environment is up to you. For example, creating an environment containing nuke and various nuke "addons" would just look like this in rez:

]$ rez-env nuke mynukeplugin-3.2+

Here I am dynamically creating a target environment containing some version of nuke, and version 3.2 or greater of a nuke plugin. The resolving algorithm will ensure that I end up with the correct version of nuke for the plugin that is chosen.

Here's an example of using a suite to create two different app-centric environments. These get stored in the suite, and the tools ('maya', 'nuke' etc) all become available, even though they actually execute within separate environments:

]$ rez-env --output=nuke.rxt nuke mynukeplugin-3.2+
]$ rez-env --output=maya.rxt maya-2013 texture_util-1.0.0 thingo-3+
]$ rez-suite mysuite nuke.rxt maya.rxt  # creates 'mysuite' dir in cwd
]$ export PATH=$PATH:./mysuite/bin
]$ which nuke
./mysuite/bin/nuke
]$ which maya
./mysuite/bin/maya

It's that easy.

Toolkit

Shotgun's Toolkit has its own deployment manager and configuration setup per project.  This is a long shot, but it could be interesting to see if Toolkit's deployment hooks could be abstracted to leverage Rez.  Ideally we'll have a single API and system for managing building and deployment.  It's something I could work on or bring up with the Shotgun folks as I get more comfortable with how Rez works.  I'm pretty familiar with the Toolkit framework at this point and have contributed to the core, so I am familiar with most of its internals.

Others and I have actually had a meeting with the Shotgun people on Toolkit; we were assessing it for use at our studio. The downside we found was its basic approach to package management - no resolving system, and package versions set in a text config file per shot (which will quickly become unmanageable). The support for managing software is rudimentary. We were also interested in how we could hook into Toolkit to run our own rez-based system, but that discussion ended up inconclusive. We stressed to them that hooking in external software management was a must for us - we cannot end up in a situation where we have two separate software management systems running alongside one another. By all means find out more about how this might work, I'd be interested to hear.
 

Windows

I started to modify some of the Rez code to get it to run under Windows using the Git Bash MSYS shell.  So far I think I have it executing correctly (there will be some more areas that I'll have to mess with due to the way the different shells (MSYS, Cygwin) deal with paths, especially the PATH environment variable).  Once I am more familiar with how I should go about configuring and setting up Rez, I'll continue testing these modifications and eventually create pull requests once I'm sure I have things stable.


Hopefully you're aware of the other Windows porting efforts going on at the moment? It would probably be good to touch base with those guys and see where your efforts might overlap. Also, hopefully you've seen that shell support in Rez is plugin-based; it may make sense for you to implement an MSYS plugin, perhaps even MSYS and Cygwin plugins that derive from some common implementation? Just food for thought.


Again, I apologize for so much detail.  Hopefully understanding some of our current systems will help determine how and where Rez will fit in.  Any advice or tips would be very much appreciated.

No problem. Welcome aboard, hopefully Rez is something that helps you in your efforts. Rez is supposed to make software management easier so we can all get on with the job of writing other software!
 

Thanks for your time and for creating Rez!

Best,

--
Tony

thx,
Allan
 

allan.johns

Jun 11, 2014, 1:39:57 PM
to rez-c...@googlegroups.com
Also, in case you get confused by my own and Chad's differing answers as to where Rez fits into bootstrapping - this just shows that you're pretty flexible in what you're able to do. For example, Chad showed how to use a package to define a 'coke' project, whereas I described how you might use 'suites' to achieve this. Both are valid, they are just different takes on it. They have different properties though - for example, when you consume Chad's 'coke' project (eg: "rez-env coke"), the resolve happens there and then. That means that you might get newer versions of some packages - for example, if your coke project had a dependency on "foo-1.0", and a new version "foo-1.0.3" was released, you would get this new version when you "rez-env coke" (in rez's versioning system, "1.0" is the superset of all versions "1.0(.X.X...X)"). In a suite however, the resolve has already happened and is baked out. To receive newer packages, you would generate a new suite. Neither approach is more correct, it just depends on how you want to use it.
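
If it helps to see that superset behaviour concretely, here is a tiny sketch using rez's bundled version library (the import path is an assumption and may differ between rez versions):

# sketch: "1.0" is a range that contains every "1.0.x" version
# (assumes rez's bundled version module; the import path may vary by rez version)
from rez.vendor.version.version import Version, VersionRange

print(VersionRange("1.0").contains_version(Version("1.0.3")))  # True
print(VersionRange("1.0").contains_version(Version("1.1")))    # False, outside the range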

Also, Chad mentioned that you could write a python-based bootstrapping system that uses Rez's python API to perhaps read shot config info from a database, construct the relevant target environment (its "request" anyway) and then use Rez to resolve the environment. This is also correct - it's up to you how tightly you want to integrate Rez into your bootstrapping process. Chad mentioned that you would call "rez-env" to open the configured shell. You wouldn't even need to do that - you can create and spawn a resolved, interactive shell from the API, as well as run a command within that resolved environment directly from the API.
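
For example, a rough sketch of doing this through the API (assuming the rez 2 python API; the requested packages are just examples):

# sketch: spawning a resolved shell, or running a command, via the API
# (assumes the rez 2 python API; the requested packages are just examples)
from rez.resolved_context import ResolvedContext

ctx = ResolvedContext(["maya-2014", "psyop_maya", "project-coke"])

# drop the user into an interactive, resolved sub shell...
ctx.execute_shell(block=True)

# ...or run a single command directly within the resolved environment
proc = ctx.execute_command(["maya", "-batch"])
proc.wait()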

hth
A

Tony Barbieri

Jun 11, 2014, 2:28:49 PM
to rez-c...@googlegroups.com
I really appreciate you both taking the time to write these responses up!

I believe I better understand what role Rez takes and where it fits in the greater scheme of things.

Hopefully you're aware of the other Windows porting efforts going on at the moment? It would probably be good to touch base with those guys and see where your efforts might overlap. Also, hopefully you've seen that shell support in Rez is plugin-based; it may make sense for you to implement an MSYS plugin, perhaps even MSYS and Cygwin plugins that derive from some common implementation? Just food for thought.

I've spoken with Marcus briefly about this.  I just started messing with Rez yesterday and wanted to get a handle on what issues there are running it on Windows under a bash shell.  My experience so far with using MSYS and Cygwin is that things mostly work as they would in a proper bash shell on Linux and OSX but there are a few quirks to work around.  I'll take a look at creating a rez-plugin once I have a better understanding of the general code-flow.

I may have more questions once I fully digest everything you both have written, but this looks like more than enough to get me started.  Thanks again for so much information!

Best,
 


--
Tony