I'm new to Django. I need to set up Git to deploy a Django website to
the production server. My question here is to know what the best
way of doing this is.
For now I only have a master branch. My problem here is that the
development environment is not equal to the production environment.
How can I have the two environments (development and production) in
Git? Should I use two new branches (Development and Production)? Please
give me a clue on this.
Other question... when I finish uploading/pushing the code to the
production server I need to restart Gunicorn (which serves the Django
website). How can I do this?
And the most important question... should I use Git to do this, or do I
have better options?
Best Regards,
To answer your question more completely, we would need to know
more about your development and production environments, in other
words, what tools you are using.
In my setup, git contains only the project code, no distinction here
to development or production environment.
I use virtualenv and pip to set up my environment. Installing on production
can be as simple as doing a "pip freeze > requirements.txt" to list all
the installed packages; installing them on the server is then
possible via "pip install -r requirements.txt".
I manage gunicorn with supervisord. It's a great tool.
If you don't have supervisord or another tool to manage the gunicorn instances,
you'll need to stop the gunicorn processes manually (kill, if working on Linux)
and start them again (gunicorn_django -c <configfile>).
You really should be looking at a tool to manage them.
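For reference, a supervisord program section for gunicorn can be quite small. This is only a sketch; the program name, paths and user below are hypothetical and need adjusting to your layout:

```ini
[program:gunicorn]
; Hypothetical paths: point these at your own virtualenv and project.
command=/srv/myproject/env/bin/gunicorn_django -c /srv/myproject/gunicorn.conf.py
directory=/srv/myproject
user=www-data
autostart=true
autorestart=true
```

With that in place, a deploy only needs a "supervisorctl restart gunicorn".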
Fabric is a tool that can help you automate some of these tedious tasks.
In short:
- If not done yet, have a look at virtualenv and pip and use them to setup
your environment on your development and production machine. On the production
machine you can setup the exact same environment by using the requirements
from the pip freeze command as explained above.
- Develop, commit to git (only code)
- On the server, go into the virtualenv, pull in the code
- Restart the gunicorn processes preferably via supervisord or another tool
- Have a look at Fabric to automate these steps
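The per-deploy part of the steps above boils down to a short, fixed list of commands. As a minimal Python sketch (all paths, the remote name and the supervisord program name are hypothetical examples, not your actual setup):

```python
# Sketch of one deployment, expressed as the shell commands to run on the
# server. Paths, remote and program names are hypothetical examples.
def deploy_commands(project_dir="/srv/myproject", branch="master"):
    """Return the remote commands for one deployment, in order."""
    venv = project_dir + "/env"
    return [
        # 1. pull in the code (inside the project checkout)
        "cd %s && git pull origin %s" % (project_dir, branch),
        # 2. sync dependencies from the frozen requirements
        "%s/bin/pip install -r %s/requirements.txt" % (venv, project_dir),
        # 3. restart the gunicorn processes via supervisord
        "supervisorctl restart gunicorn",
    ]

for cmd in deploy_commands():
    print(cmd)
```

A Fabric fabfile is essentially this list with each command handed to Fabric's run() over SSH.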
Regards,
Benedict
have two branches (and do not put settings.py under version control)
>
> Other question... when I finish to upload/push the code to the
> Production server I need to restart the Gunicorn(serves Django
> website). How can I do this?
use supervisord
>
> And the most important question... Should I use Git to do this or I
> have better options?
I personally prefer mercurial - bitbucket rocks.
--
regards
Kenneth Gonsalves
I'm using Nginx + Gunicorn + Supervisor + Virtualenv
My goal is to deploy the code to production in a one-click step. I
think I will read up on Fabric to achieve this.
I will also read more about pip; I don't know how pip freeze works.
If you have some more clues, you are welcome.
Best Regards,
If you want 1 click, you'll need fabric, read up on that.
As for pip, pip can be used to install the various dependencies of your
project in your virtualenv.
You can then list these dependencies with this command:
pip freeze > requirements.txt
This writes them into a file named requirements.txt that you can then
use to setup the dependencies of your virtualenv on the production server.
pip install -r requirements.txt
You only do this to set up the production server; it's not needed to transfer
code. The code you transfer with git (or mercurial, etc.).
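For what it's worth, the requirements.txt that pip freeze writes is just one pinned package per line, something like this (the version numbers here are made up):

```
Django==1.3.1
gunicorn==0.13.4
psycopg2==2.4.2
```

"pip install -r" then recreates exactly those versions in the production virtualenv.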
Regards,
Benedict
I use git branches to manage my differences between development,
production and testing. Everything that matters at all is in git, I
can rebuild my entire site using a script which also lives in git.
Now, obviously each of these environments will be slightly different -
at the very least domain names and port numbers will differ and in my
case I also use mock back-end databases during development and real
ones in testing and production. Also, the deployment scripts
themselves (I am a dinosaur who uses ssh, rsync and makefiles) are
different on each branch.
So all three branches are different, and I don't want to risk losing
the differences when I merge. But aside from these deliberate
deviations, I want to keep the three branches as closely synchronized
as possible.
The key to making this work is that all development happens on the main
branch, and production and testing branch off dev and are updated
using the `git rebase` command. If you don't know `git rebase`, I
highly recommend learning it; it's like magic. :-) When I update my
production branch it automates these steps:
1. reverts all differences between production and the dev branch point
2. applies the development progress (moving the branch point to the head)
3. reapplies the patches that got reverted in 1.
Basically, this moves the branch point forward, bringing both branches
back in sync without losing the deliberate deviations.
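You can watch those steps happen in a throwaway repository. This toy script (branch names "dev" and "production" and the file contents are made-up examples) shows a deliberate deviation surviving the rebase while the branch point moves forward:

```python
# Toy demo of the rebase-based sync described above, in a throwaway repo.
# Branch names ("dev", "production") and files are hypothetical examples.
import os, subprocess, tempfile

repo = tempfile.mkdtemp()

def git(*args):
    subprocess.run(["git"] + list(args), cwd=repo, check=True,
                   capture_output=True, text=True)

def write(name, text):
    with open(os.path.join(repo, name), "w") as f:
        f.write(text)

git("init")
git("config", "user.email", "you@example.com")
git("config", "user.name", "You")
git("checkout", "-b", "dev")

write("settings.py", "PORT = 8000\n")          # dev: initial commit
git("add", "."); git("commit", "-m", "initial")

git("checkout", "-b", "production")            # production branches off dev
write("settings.py", "PORT = 80\n")            # ...with a deliberate deviation
git("commit", "-am", "production port")

git("checkout", "dev")                         # development continues
write("util.py", "def helper(): pass\n")
git("add", "."); git("commit", "-m", "new feature")

git("checkout", "production")                  # sync: move the branch point
git("rebase", "dev")                           # forward, replaying the deviation

print(open(os.path.join(repo, "settings.py")).read().strip())  # PORT = 80
print(os.path.exists(os.path.join(repo, "util.py")))           # True
```

The production-only patch (PORT = 80) is reapplied on top of dev's new head, and the new dev file arrives on production, exactly as in steps 1-3.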
Sometimes there is some manual work involved in resolving conflicts,
but that is a good thing as those are generally things which have
changed and deserve extra attention during deployment.
At any time if I want to compare what is in production with the
testing or development trees, I can just run `git diff production` or
`git diff testing` to see all the differences.
I've been working this way for a bit over a year, developing and
running the pagekite.net site and service, and I really like it.
--
Bjarni R. Einarsson
Founder, lead developer of PageKite.
Make localhost servers visible to the world: http://pagekite.net/
you put your passwords and keys under version control?
--
regards
Kenneth Gonsalves
Where else would you put them? Not every VCS is wide open to view, our
configuration VCS is highly locked down, but you need to record the
information _somewhere_ in order to do coherent SCM.
Cheers
Tom
You might find this useful:
http://python.mirocommunity.org/video/1689/pycon-2010-django-deployment-w
It's jacobian's django deployment workshop from pycon 2010, so it's
about 3h long. It's worth watching front to back, but he only spends
a little time on deployment tools. (The rest is nginx, apache,
PostgreSQL, memcached, etc.) The tl;dw is: check out fabric, puppet, and
buildout for automating these things.
Cheers,
john
--
John P. Kiffmeyer
Email/XMPP: jo...@thekiffmeyer.org
Check out https://code.djangoproject.com/wiki/SplitSettings
Personally, I use the second example, "Multiple setting files
importing from each other".
You can put settings common to both development and production in
settings.py and keep it in version control, but have a
settings_local.py or something that has things like database
connection information (including passwords), SECRET_KEY, and whatever
else would be secret or different between development/production.
Then .gitignore settings_local.py. I also keep a
settings_local.py.template in version control that's just a
fill-in-the-blank of settings_local.py for quick set up on a new dev
machine or production server.
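As a minimal sketch of that layout (the setting names and values here are placeholders, not a drop-in config):

```python
# settings.py -- the shared, version-controlled half (sketch only).
DEBUG = False
ALLOWED_HOSTS = ["example.com"]

# settings_local.py is .gitignored and supplies per-machine secrets.
try:
    from settings_local import *  # noqa: F401,F403
except ImportError:
    # No local file yet: fall back to obvious, non-production defaults
    # so a fresh checkout still starts (placeholder values).
    SECRET_KEY = "dev-only-insecure-key"
    DATABASES = {"default": {"ENGINE": "django.db.backends.sqlite3",
                             "NAME": "dev.sqlite3"}}
```

On a real server, settings_local.py (copied from the .template file) defines SECRET_KEY and DATABASES, and the fallback branch never runs.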
sorry - I was thinking open source.
--
regards
Kenneth Gonsalves
I have three separate settings files for development, test, and
production. They don't import from each other yet, but that would be
much better (thanks John). Fabric symlinks settings.py to whichever
one is appropriate for the environment.
My fabric file has separate functions for test and production which
just set some fabric environment variables (domain, IP, etc.).
This setup has worked well thus far. In the future I'd like to have
the settings, fabfiles, nginx and init script templates all import
common settings from some central place.