
I'm interested in finding out the best-practice way to organise Django applications on a server.

  • Where do you put your Django code? The (now old) Almanac says /home/django/domains/somesitename.com/ but I've also seen things placed in /opt/apps/somesitename/. I think the /opt/ idea sounds better as it's not global, but I haven't seen opt used before, and presumably it may be better for apps to live in the home directory of a site-specific deployment user.

  • Would you advise having one global deployment user, one user per site, or one user per site-environment (e.g. sitenamelive, sitenamestaging)? I'm thinking one per site.

  • How do you version-control your configuration files? I currently keep them in an /etc/ folder at the top level of source control, e.g. /etc/nginx/somesite-live.conf.

  • How do you provision your servers and do deployments? I've resisted Chef and Puppet for years, hoping for something Python-based. Silver Lining doesn't seem ready yet, and I have high hopes for Patchwork (https://github.com/fabric/patchwork/). Currently we just deploy with some custom Fabric scripts, but 'server provisioning' is handled by a bash script plus a few manual steps for adding keys and creating users. I'm about to investigate Silk Deployment (https://bitbucket.org/btubbs/silk-deployment) as it seems closest to our setup.

Thanks!


1 Answer


I think there would have to be more information on what kinds of sites you are deploying: there would be differences based on the relations between the sites, both programmatically and 'legally' (as in a business relation):

  • Having a system account per 'site' can be handy if the sites are 'owned' by different people - if you are a web designer or programmer with a few clients, then you might benefit from the separation.
  • If your sites are related, e.g. a forum site, a blog site etc., you might benefit from a single deployment system (like ours).
  • For libraries, if they're hosted on reputable sources (PyPI, GitHub etc.), it's probably OK to leave them there and deploy from them - if they're on dodgy hosts that go up and down, we take a copy and put them in a /thirdparty folder in our git repo.

Fabric

Fabric is amazing - if it's set up and configured right for you:

  • We have a policy here which means nobody ever needs to log onto a server (which is mostly true - there are occasions when we want to look at the raw nginx log file, but it's a rarity).
  • We've got Fabric configured so that there are individual functional blocks (restart_nginx, restart_uwsgi etc.), but also
  • higher-level 'business' functions which run all the little blocks in the right order - to update all our servers we merely type 'fab -i secretkey live deploy' - the live sets the settings for the live servers, and deploy deploys (the -i is optional if you have your .ssh keys set up right); see the sketch after this list.
  • We even have a control flag so that if the live setting is used, it will ask 'are you sure?' before performing the deploy.
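A minimal sketch of what such a fabfile might look like (Fabric 1.x API); the host names here are invented and the service commands are placeholders, so treat it as an illustration of the pattern rather than our exact script:

from fabric.api import env, task, sudo, abort
from fabric.contrib.console import confirm

env.is_live = False

@task
def live():
    # Hypothetical live hosts - point env.hosts at your real servers.
    env.hosts = ['web1.example.com', 'web2.example.com']
    env.is_live = True

# Individual functional blocks...
@task
def restart_nginx():
    sudo('service nginx restart')

@task
def restart_uwsgi():
    sudo('service uwsgi restart')

# ...and a higher-level 'business' function that runs the blocks in
# the right order, with the 'are you sure' guard for live deploys.
@task
def deploy():
    if env.is_live and not confirm('Deploying to LIVE - are you sure?'):
        abort('Deploy cancelled.')
    restart_uwsgi()
    restart_nginx()

Invoked as 'fab -i secretkey live deploy', just as described above.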

Our code layout

So our code base layout looks a bit like this:

/         <-- folder containing readme file etc
/bin/     <-- folder containing nginx & uwsgi binaries (!)
/config/  <-- folder containing nginx config and pip list but also things like pep8 and pylint configs 
/fabric/  <-- folder containing fabric deployment
/logs/    <-- holding folder that nginx logs get written into (but not committed)
/src/     <-- actual source is in here!
/thirdparty/ <-- third party libs that we didn't trust the hosting of for pip

Possibly controversial, because we load our binaries into our repo, but it means that if I upgrade nginx on the boxes and want to roll back, I just do it by manipulating git. I know what works against what build.

How our deploy works:

All our source code is hosted on a private Bitbucket repo (we have a lot of repos and a few users; that's why Bitbucket works better for us than GitHub). We have a user account for the 'servers' with its own SSH key for Bitbucket.

The deploy task in Fabric performs the following on each server (sketched below):

  • IRC bot announces the start in the IRC channel
  • git pull
  • pip deploy (from a pip list in our repo)
  • syncdb
  • south migrate
  • uwsgi restart
  • celery restart
  • IRC bot announces completion in the IRC channel
  • start availability testing
  • announce results of availability testing (and post a report to a private pastebin)
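Roughly, that sequence as a single Fabric task (Fabric 1.x, South-era Django); the paths, service names and the IRC-announce helper are hypothetical stand-ins:

from fabric.api import task, run, sudo, local

def announce(message):
    # Hypothetical helper: post a message into the team IRC channel.
    local('irc-announce "%s"' % message)

@task
def deploy():
    announce('Deploy starting')
    run('git pull')
    run('pip install -r config/requirements.txt')  # the pip list kept in the repo
    run('python src/manage.py syncdb --noinput')   # pre-1.7 Django schema setup
    run('python src/manage.py migrate')            # South migrations
    sudo('service uwsgi restart')
    sudo('service celeryd restart')
    announce('Deploy complete')
    # Availability testing and the pastebin report would follow here.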

The 'availability test' (think unit test, but against the live server) hits all the web pages and APIs with the 'test' account to make sure it gets back sane data without affecting live stats.
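A minimal sketch of such an availability check, assuming a requests-based client; the URLs and test-account credentials are invented:

import requests

BASE = 'https://www.example.com'
PAGES = ['/', '/forum/', '/api/v1/status/']

def check_availability():
    session = requests.Session()
    # Log in as the dedicated 'test' account so live stats aren't affected.
    session.post(BASE + '/accounts/login/',
                 data={'username': 'test', 'password': 'secret'})
    failures = []
    for path in PAGES:
        resp = session.get(BASE + path, timeout=10)
        if resp.status_code != 200:
            failures.append((path, resp.status_code))
    return failures  # an empty list means every page answered sanely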

We also have a backup git service, so if Bitbucket is down the deploy fails over to it gracefully, and we even have Jenkins integration: a commit to the 'deploy' branch triggers the deployment.

The scary bit

Because we use cloud computing and expect high throughput, our boxes auto-spawn. There's a default image which contains a copy of the git repo etc., but invariably it will be out of date, so there's a startup script which does a deployment to itself, meaning new boxes added to the cluster are automatically up to date.
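A minimal sketch of such a startup script, assuming the repo and fabfile live at hypothetical paths baked into the image:

#!/usr/bin/env python
# First-boot script: bring this freshly spawned box up to date by running
# the same deploy used for the rest of the cluster, against itself.
import subprocess

REPO = '/opt/apps/oursite'  # hypothetical checkout location on the image

subprocess.check_call(['git', 'pull'], cwd=REPO)
subprocess.check_call(['fab', '-H', 'localhost', 'deploy'], cwd=REPO + '/fabric')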

Answered 2012-06-21T14:51:59.513