
Environment:

  • Around a dozen services: both WARs and JARs (run with the Tanuki wrapper)
  • Each service is being deployed to development, staging, and production: the environments use different web.xml files, Spring profiles (set on the Tanuki wrapper),...
  • We want to store the artifacts on AWS S3 so they can be easily fetched by our EC2 instances: some services run on multiple machines, AutoScaling instances automatically fetch the latest version once they boot, development artifacts should automatically be deployed once they are updated,...
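
For illustration, the boot-time fetch could be as simple as the following (service name, bucket and key layout are placeholders; it assumes the AWS CLI and instance-profile credentials are available on the instance):

  #!/bin/sh
  # Hypothetical boot script (e.g. run from EC2 user data): pull the newest
  # artifact for this service/environment from S3 and (re)start the service.
  SERVICE=my-service                 # placeholder service name
  ENVIRONMENT=production             # set per AutoScaling group
  BUCKET=s3://my-deploy-bucket       # placeholder bucket

  # Fetch whatever is currently published as "latest" for this environment
  aws s3 cp "${BUCKET}/${ENVIRONMENT}/${SERVICE}/latest/${SERVICE}.war" \
    "/opt/${SERVICE}/${SERVICE}.war"

  # Restart the service so it picks up the freshly fetched artifact
  service "${SERVICE}" restart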

Our current deployment process looks like this:

  1. Jenkins builds and tests our code
  2. Once we are satisfied, we do a release with the Artifactory plugin (only for production deployments; development and staging artifacts are based on plain Jenkins builds)
  3. We use the Promoted Builds plugin with 3 different promotion jobs for each project (development, staging, production) to build the artifact for the desired environment and then upload it to S3

This works mostly OK; however, we've run into multiple annoyances:

  1. The Artifactory plugin cannot tag and update the source code because it is not able to commit to the repository (this might be specific to CloudBees, our Jenkins provider). Never mind - we can do that manually.
  2. The Jenkins S3 plugin doesn't work properly with multiple upload targets (we are using one for each environment) - it will try to upload to all of them and duplicate the settings when you save the configuration: JENKINS-18563. Never mind - we are using a shell script for the upload, which is working fine (a sketch of such a script follows after this list).
  3. The promotion jobs will sometimes fail because they can be run on a different instance than the original build - resulting either in a failed build (because the files are missing) or in an outdated build (since an old version is being used). Again, this might happen due to the specific setup CloudBees is providing. It can be fixed by running the build again and hoping that the promotion job runs on the same instance as its build this time (which is pure luck in my understanding).
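
For illustration, a minimal upload script of the kind mentioned in point 2 could look like this (bucket name and paths are placeholders; it assumes the AWS CLI is available on the build machine):

  #!/bin/sh
  # Sketch of a per-environment S3 upload step; ENVIRONMENT is passed in by
  # the promotion job, BUILD_NUMBER is the standard Jenkins variable.
  set -e

  ENVIRONMENT="$1"                     # development | staging | production
  ARTIFACT="target/my-service.war"     # placeholder artifact path
  BUCKET="s3://my-deploy-bucket"       # placeholder bucket

  # Keep a per-build copy and update the "latest" pointer for that environment
  aws s3 cp "${ARTIFACT}" "${BUCKET}/${ENVIRONMENT}/my-service/${BUILD_NUMBER}/my-service.war"
  aws s3 cp "${ARTIFACT}" "${BUCKET}/${ENVIRONMENT}/my-service/latest/my-service.war"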

We've been advised by CloudBees to do a code checkout before running the promotion job, since the workspace is ephemeral and it's not guaranteed to exist or to be up to date. However, shell scripts in promotion jobs don't seem to honor environment variables (as stated in another StackOverflow thread), even though they are linked below the textarea field. So svn checkout ${SVN_URL} -r ${SVN_REVISION} doesn't work.

I'm pretty sure this can be fixed by some workaround, but surely I must be doing something terribly wrong. I assume many others are deploying their applications in a similar fashion and there must be a better way - without various workarounds and annoyances.

Thanks for any pointers and reading everything to the end!


1 Answer

  • The biggest issue you face is that you actually want to do a build as part of the promotion. The promoted builds plugin is designed to do a few small actions, primarily on the archived artifacts of your build, or else to perform tagging or other such actions.

    • The design ensures that it gets a workspace on a slave. But (and this is a general Jenkins thing, not something CloudBees-specific) the workspace you get need not be the workspace that was used for the build you are trying to promote. It could be:

      1. An empty workspace
      2. The workspace of the most recent build of the project (which may be a newer build than you are trying to promote)
      3. The workspace of an older build of the project (when you cannot get onto the slave that has the most recent build of the project)

      Now, which of these you get depends entirely on the load that your Jenkins cluster is under. When running on CloudBees, you are most likely to encounter one of the first two situations above.

    • The actions you place in your promotion should therefore be “self-sufficient”. So for example they will copy the archived artifacts into a specific directory and then use those copied artifacts to do their 'thing'. Or they will trigger a downstream build, passing through parameters from the build that is promoted. Or they will perform some action on a remote server without using any local state (i.e. flipping the switch in a blue-green deployment). A minimal sketch of the first pattern follows below.
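
      As a rough sketch of the first pattern (and only a sketch: the artifact path is a placeholder, it assumes the PROMOTED_URL variable that the Promoted Builds plugin makes available to promotion steps, and any HTTP authentication your Jenkins requires is omitted), a promotion step could fetch the archived artifact of the exact build being promoted instead of trusting the workspace:

      #!/bin/sh
      # Self-sufficient promotion step: download the archived artifact of the
      # promoted build over HTTP rather than relying on workspace contents.
      # PROMOTED_URL points at the build being promoted; the artifact path is
      # a placeholder and authentication is omitted.
      set -e

      STAGING_DIR=promotion-artifacts
      rm -rf "${STAGING_DIR}"
      mkdir -p "${STAGING_DIR}"

      # Fetch the artifact that was archived by the promoted build
      curl -fsS -o "${STAGING_DIR}/my-service.war" \
        "${PROMOTED_URL}artifact/target/my-service.war"

      # ...from here on, act only on the copied artifact (upload to S3, etc.)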

  • In your case, you are fighting on multiple fronts. The second front is Maven itself.

    • Maven does not like it when a build profile being active produces a different, non-equivalent, artifact.
    • Maven can probably live with the release profile producing a version of the artifact with JavaScript minified and perhaps debug info stripped (though it would be better if it could be avoided)... but that is because the two artifacts are equivalent. A webapp with JavaScript minified should work just as well as one that is not minified (unless minification introduces bugs)
    • You use profiles to build completely different artifacts. This is bad as Maven does not store the active profile as part of the information that gets deployed to the Maven repository. Thus when you declare a dependency on one of the artifacts, you do not know which profile was active when that artifact was built. A common side-effect of this is that you end up with split-brain artifacts, where one part of a composite artifact is talking to the development DB and the other part is talking to the production DB. You can tell there is this bad code smell when developers routinely run mvn clean install -Pprod && mvn clean install -Pprod when they want to switch to building a production build because they have been burned by split-brain artifacts in the past. This is a symptom of bad design, not an issue with Maven (other than it would be nice to be able to create architecture specific builds of Maven artifacts... but they can pose their own issues)
    • The “Maven way” to handle the creation of different artifacts for different deployment environments is to use a separate module to repack the “generic” artifact. As is typical of Maven's subtle approach to handling bad architecture, you will need to add repack modules for each environment you want to target... which discourages having lots of different target environments... (i.e. if you have 10 webapps/services to deploy and three target deployment environments you will have 10 generic modules and 3x10 target specific modules... giving a 40 module build) and hopefully encourages you to find a way to have the artifact self-discover its correct configuration when deployed.
  • I suspect that the safest way to hack a solution is to rely on copying archived artifacts.

    1. During the main build, create a file that will check out the same revision as is being built, e.g.

      echo '#!/usr/bin/env sh' > checkout.sh
      echo "svn revert . -R" >> checkout.sh
      echo "svn update --force -r ${SVN_REVISION}" >> checkout.sh
      

      Though you may want to write something a little more robust, e.g. ensuring that you are in an SVN working copy, ensuring that the working copy is using the correct remote URI, removing any files that are not needed, etc. (a slightly more defensive variant is sketched after these steps).

    2. Ensure that the script you create is archived.

    3. At the beginning of your promotion process, copy the archived checkout.sh into the workspace and run that script. Then you should be able to do all your subsequent promotion steps as before.
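
    A slightly more defensive variant of the generated script (still only a sketch; checking the working-copy URL and starting clean otherwise is just one reading of “more robust”) could be written during the main build like this:

      # Still generated during the main build, so ${SVN_URL} and ${SVN_REVISION}
      # are baked in at build time; the single-quoted lines are written verbatim
      # and only evaluated when the promotion later runs checkout.sh.
      echo '#!/usr/bin/env sh' > checkout.sh
      echo 'set -e' >> checkout.sh
      echo '# Which repository does the current workspace (if any) point at?' >> checkout.sh
      echo 'CURRENT_URL=$(svn info . 2>/dev/null | sed -n "s/^URL: //p")' >> checkout.sh
      echo "if [ \"\${CURRENT_URL}\" != \"${SVN_URL}\" ]; then" >> checkout.sh
      echo '  # Empty or foreign workspace: start from scratch' >> checkout.sh
      echo '  rm -rf ./* .svn 2>/dev/null || true' >> checkout.sh
      echo "  svn checkout --force ${SVN_URL} -r ${SVN_REVISION} ." >> checkout.sh
      echo 'else' >> checkout.sh
      echo '  # Existing working copy: drop local changes and pin the revision' >> checkout.sh
      echo '  svn revert . -R' >> checkout.sh
      echo "  svn update --force -r ${SVN_REVISION}" >> checkout.sh
      echo 'fi' >> checkout.sh
      chmod +x checkout.sh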

  • A better solution would be to create build jobs for each of the promotion processes and pass the SVN revision through as a build parameter (though this will require more work to set up on your side, since you already have the promotion processes in place).
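
    With that approach the promotion step itself can stay tiny: it only triggers the environment-specific job with the revision of the promoted build. A sketch using Jenkins' buildWithParameters endpoint (job name, parameter name and credentials are placeholders; the revision could come from a small file archived by the main build, similar to the checkout.sh trick above):

      #!/bin/sh
      # Sketch: promotion only triggers a parameterized downstream deploy job.
      # Job name, parameter name and credentials are placeholders.
      set -e

      # Hypothetical file archived by the main build containing its revision
      REVISION=$(cat revision.txt)

      curl -fsS -X POST --user "jenkins-user:api-token" \
        --data "SVN_REVISION=${REVISION}" \
        "${JENKINS_URL}job/my-service-deploy-production/buildWithParameters"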

  • The best solution is to fix your architecture so that you produce artifacts that discover their correct configuration from their target environment, and thus you can just deploy the same artifact to each required environment without having to rebuild artifacts at all.
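
    Since the Spring profile is already set on the Tanuki wrapper, one way to move in that direction is to let each machine (rather than each build) choose the profile, so the identical artifact is deployed everywhere. A sketch, with placeholder paths and property index:

      #!/bin/sh
      # Run once per machine (e.g. at provisioning time): tell the wrapper which
      # Spring profile this host should activate. Path and index are placeholders.
      ENVIRONMENT=production   # per-host value, e.g. from an EC2 tag or user data

      echo "wrapper.java.additional.10=-Dspring.profiles.active=${ENVIRONMENT}" \
        >> /opt/my-service/conf/wrapper.conf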

Answered 2013-07-29T09:52:13.840