Build and Deploy Procedures

From Chapter 4

This chapter, "Build and Deploy Procedures," is excerpted from the new book, IBM WebSphere: Deployment and Advanced Configuration, authored by Roland Barcia, Bill Hines, Tom Alcott, and Keys Botzum. © International Business Machines Corporation 2005. ISBN 0-13-146862-6. To learn more, visit www.phptr.com/title/0131468626.

In the last chapter, we provided you with a quick start to WebSphere Application Server (WAS) by configuring and deploying the Personal Trade System. The deployment was simple and straightforward, using the WAS administrative console to do most of the work. With small applications, this approach is often sufficient. However, for most J2EE applications, deployment requires much more planning and process. In fact, the entire application lifecycle requires rigorous processes.

In this chapter, we will focus on three closely related areas: builds, deployment, and server configuration. We will start by discussing some key reasons why rigor is important and then define some terminology that we'll be using for the rest of the chapter. Then we'll identify three common approaches to build and deploy. Following that, we will focus on automation.

Automation of the build and deploy steps can reduce the amount of effort required to produce and then deploy an application. Automation can also reduce the likelihood of human error, which is not an insignificant concern in large and complex projects. In addition to automating the assembly and building of applications, we will offer advice on how to automate server configuration.

Procedures Matter

The enterprise development model is more complex than traditional small application development: code must be written, compiled, assembled, and then deployed (presumably with some testing included). That is, after the code is compiled, the application must be assembled from a set of modules into a single application EAR. In addition, deployment includes the implicit concept of "binding," which is taking a built application and binding its contents to the target runtime environment. These two concepts are very powerful and give applications a great deal of flexibility, but they do require additional discipline. Build and deploy in these situations are core functions that must be managed as carefully as any other part of the project.
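To make the assembly step concrete, the sketch below shows in outline what goes into an EAR: the application's modules at the root of the archive, plus a META-INF/application.xml deployment descriptor that lists them. This is only an illustrative Python sketch; the module and file names are hypothetical, and in practice the packaging is normally done by the IDE's export function or a build tool such as Ant.

    # ear_assembly_sketch.py -- illustrative only; module and file names are hypothetical.
    # A J2EE EAR is simply a ZIP archive that contains the application's modules
    # (WARs, EJB JARs) at its root, plus META-INF/application.xml describing them.
    import zipfile

    # Hypothetical pre-built modules for the Personal Trade System sample.
    MODULES = ["PersonalTradeWeb.war", "PersonalTradeEJB.jar"]

    def build_ear(ear_path, descriptor_path, modules=MODULES):
        """Package the deployment descriptor and pre-built modules into one EAR."""
        with zipfile.ZipFile(ear_path, "w", zipfile.ZIP_DEFLATED) as ear:
            ear.write(descriptor_path, "META-INF/application.xml")
            for module in modules:
                ear.write(module, module)  # modules sit at the root of the archive

    if __name__ == "__main__":
        build_ear("PersonalTradeSystem.ear", "application.xml")

The binding step described above then maps the names declared inside that archive onto resources in a specific target environment.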

Setting up high-quality procedures correctly can set the tone for the rest of the project.

In our experience, projects that fail to develop solid procedures often experience serious problems, and occasionally they fail outright. In this book, we are assuming a rigorous approach to development, roughly like this:

  • Developers write code following high-quality software engineering principles. This includes a configuration management system for their source code and other artifacts.
  • Developers execute unit tests against their code before checking it into the repository.
  • Code is built into a complete system following a well-defined procedure.
  • Code is deployed to a runtime environment (an actual WAS instance) following some well-defined procedure.
  • Deployments are verified using a well-defined set of component test cases. Component tests are essentially a subset of development unit tests and are focused on cross-component function and environment sanity (a minimal verification sketch follows this list).
  • The target application server environment is configured using well-defined procedures.
  • Multiple runtime environments are available for different purposes (development integration, system test, performance, etc.) as defined in Chapter 16, "Ideal Development and Testing Environments."
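
To illustrate what an automated verification step might look like, here is a minimal Python sketch that checks a deployed application's pages for expected content after each deployment. The URL and expected text are hypothetical placeholders; a real component test suite would be considerably larger.

    # verify_deployment_sketch.py -- minimal sanity-check sketch; the URL and
    # expected text below are hypothetical placeholders.
    import sys
    import urllib.request

    CHECKS = [
        # (URL of a deployed page or servlet, text expected in the response)
        ("http://localhost:9080/trade/ping", "OK"),
    ]

    def verify():
        failures = 0
        for url, expected in CHECKS:
            try:
                body = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
            except Exception as exc:
                print("FAIL: %s (%s)" % (url, exc))
                failures += 1
                continue
            if expected in body:
                print("PASS: %s" % url)
            else:
                print("FAIL: %s did not contain %r" % (url, expected))
                failures += 1
        return failures

    if __name__ == "__main__":
        sys.exit(1 if verify() else 0)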

If you are not following procedures like these, your projects will suffer problems, quite possibly serious problems. Unfortunately, due to space constraints, we can't discuss every aspect of this basic approach. Instead, we are concerned here with the build, deployment, and server configuration steps. Notice that the preceding list makes no mention of automation. This is not an accident. Rigorous procedures can be developed without automation and will work fine for projects of a certain size. But for various reasons already discussed, automation can greatly benefit the basic approaches. Many parts of this book focus on using automation.

In this chapter, we will focus on moving an application out of the desktop development environment and into development integration. Because the deployment life cycle starts from the development environment, creating a rigorous process for moving a build out of development can set the tone for a successful deployment. Rigorous process in development can be a model for rigorous deployment in other environments, such as system tests or performance testing. We will focus on these other environments in Chapter 16.

Development and Build Terminology


Before continuing with the remainder of this chapter, we need to define some common terms. Some of this terminology may already be familiar, but we define it here so that we can use it consistently:

1.  Runtime or runtime environment: A place where code can be executed. Basically, this is a system or set of systems that includes WAS and other needed components where application code is deployed for various purposes.

2.  IDE: An Integrated Development Environment used by an application developer to write, compile, and unit test code. WebSphere Studio is an example of an IDE.

3.  Unit Test Environment (UTE): A runtime where developers run their unit tests. This runtime is often integrated with the IDE to offer developers quick testing and debugging. The WAS unit test environment inside WebSphere Studio is an example of a UTE. If developers do not have access to an IDE with a UTE, it is not uncommon for them to have a WAS install on their desktop for unit testing.

4.  Verification runtime: A runtime needed to verify a successful build. This runtime is usually an application server and usually is located on the same machine where an official build is done. Depending on where the build is done, this runtime can be a UTE inside of an IDE, like WebSphere Studio, or a standalone WAS install on a dedicated build server.

5.  Code repository: A place where developers store versions of code and code history to share with others. Examples of code repositories are Rational ClearCase and the Concurrent Versions System (CVS). We will use CVS throughout this book because it is readily available. Appendix C, "Setup Instructions for Samples," has information about where you can obtain CVS.

6.  Golden Workspace: A special term we use to represent a desktop snapshot of a deployable version of the code. This is essentially a version of the system loaded into a single IDE instance. This instance represents a final version before an official build.

7.  Build server: A centralized server needed to run automated builds. A centralized build can be used to coordinate projects. Specialized build scheduling and publishing tools, such as CruiseControl or AntHill, can be installed on a build server to add an extra level of automation.

8.  Binary repository: A place to store packaged builds that are ready for deployment. Builds must be stored and versioned just like code.
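
To make the binary repository idea concrete, the following is a minimal Python sketch that archives a packaged build under a version-stamped directory so any build can later be retrieved, for example, to roll an environment back. The repository path, naming convention, and checksum choice are assumptions for illustration, not a prescribed layout.

    # archive_build_sketch.py -- illustrative only; the repository path and naming
    # convention are assumptions, not a prescribed layout.
    import hashlib
    import os
    import shutil

    BINARY_REPO = "/builds/PersonalTradeSystem"   # hypothetical repository root

    def archive_build(ear_path, version):
        """Copy the EAR into a version-stamped directory and record its checksum."""
        target_dir = os.path.join(BINARY_REPO, version)
        os.makedirs(target_dir, exist_ok=True)
        target = os.path.join(target_dir, os.path.basename(ear_path))
        shutil.copy2(ear_path, target)
        with open(target, "rb") as f:
            digest = hashlib.md5(f.read()).hexdigest()
        with open(target + ".md5", "w") as f:
            f.write(digest + "\n")
        return target

    if __name__ == "__main__":
        archive_build("PersonalTradeSystem.ear", "1.0.3")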

Build and Deployment Models

Now that we have outlined some key concepts, we are going to define three build and deployment models at increasing levels of sophistication. The three models are Assemble Connection Model, Assemble Export Model, and Assemble Server Model. We'll return to the discussion of server configuration later.

ASSEMBLE CONNECTION MODEL
The simplest possible method of build and deploy is the Assemble Connection Model. Figure 4-1 illustrates this model.

In this model, build, assembly, and verification steps are all performed from a desktop using either an IDE or an assembly tool. For example, WebSphere Studio contains tools for deploying both server configurations and applications directly from a WebSphere Studio workspace into a runtime environment.

When using this model, it is essential to use a Golden Workspace as part of the official build and deploy process. That is, instead of deploying directly from an uncontrolled developer's desktop, use a dedicated desktop (or at least a dedicated workspace) for assembling the application and deploying it to the application server.

This model is simple for small development environments and it is easy to implement. However, it has major drawbacks:

  • The development team makes use of deployment shortcuts available to developers. For example, WebSphere Studio has a remote server deployment feature that enables an application to be deployed directly from the development workbench. No EAR is ever produced, and thus the deployment details are hidden from developers. Despite its simplicity, this deployment process is less portable because J2EE defines no way of deploying applications that are not packaged as an EAR or one of its submodules.
  • Because no EAR file is produced, there is no versioning of any EAR file. If an environment needs to roll back to a prior version, the build must be reproduced from the source. This can lead to unanticipated errors in reproducing the correct build if the source repository tool is not sophisticated or if assemblers do not use the code repository correctly.
  • It takes great discipline for developers to make this work. It is very difficult to consistently reproduce a build. Deploying directly from the IDE or application source places too many dependencies on the idiosyncrasies of individual workstations and workspaces.
  • The process used in development and testing does not usually reflect the process used in production. This means the deployment process for production will most likely suffer from a lack of proper testing.
  • This process does not scale well. As the number of artifacts and developers grows, the assembler and deployer become a bottleneck. Many artifacts and developers are feeding input to a single person executing a manual process.

We strongly oppose this approach. We include it here only as a point of contrast to the next two approaches, which are more reasonable.

ASSEMBLE EXPORT MODEL
The key problem with the previous model is that the deployment step used during development does not in any way reflect what is typically done in production. That's unfortunate, because it delays the discovery of inevitable problems. In the Assemble Export Model, we recognize this weakness and move to a more robust model. Here, the development tools are used to produce the deployable package (either from a Golden Workspace or from deployment scripts). The deployable package is then delivered to the runtime environment and deployed using a well-defined procedure. Figure 4-2 illustrates this model.
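
As an illustration of what such a well-defined procedure might look like when scripted, the following is a minimal wsadmin (Jython) sketch that installs an EAR and starts the application. The EAR path, application name, and server name are hypothetical; the AdminApp, AdminConfig, and AdminControl objects are supplied by wsadmin itself when the script is run with something like wsadmin.sh -lang jython -f installApp.py.

    # installApp.py -- minimal wsadmin (Jython) deployment sketch.
    # The EAR path, application name, and server name below are hypothetical.

    earFile = '/builds/PersonalTradeSystem/1.0.3/PersonalTradeSystem.ear'
    appName = 'PersonalTradeSystem'

    # Install the application and bind it to the target server using default bindings.
    AdminApp.install(earFile, '[-appname %s -server server1 -usedefaultbindings]' % appName)

    # Persist the configuration change to the master repository.
    AdminConfig.save()

    # Start the newly installed application on the running server.
    # (In a Network Deployment cell, a node synchronization step would typically precede this.)
    appMgr = AdminControl.queryNames('type=ApplicationManager,process=server1,*')
    AdminControl.invoke(appMgr, 'startApplication', appName)

Because the same script can be run unchanged against test and production cells, the deployment exercised during testing closely resembles the one used in production.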

This results in a process that is rigorous but potentially time-consuming, depending on the complexity of the application and deployment environment. It is also subject to human error. For systems of reasonable size, this approach will work. With large environments, however, this is often insufficient.

Nonetheless, this is a clear and defined way of producing an EAR file every time. The application is deployed to a runtime similar to how it will be done in production. Therefore, certain errors can be found in this model that could not be found in the previous model. This approach has some nice advantages. Unfortunately, this model still suffers from some drawbacks:

  • There is still a scalability issue because someone is needed to create the EAR file from within the IDE (admittedly, this is only a few mouse clicks) and then execute the deployment using the WAS admin tools. In addition, the application assembler has to verify that changes made by various developers work together. As the number of developers increases, this task can become more difficult for one individual.
  • The build process is dependent on the development team. In some organizations, this is an issue in itself.

This model has been used successfully in numerous organizations. Furthermore, if the project grows to a larger size, it can be upgraded to the next model.

ASSEMBLE SERVER MODEL
The Assemble Server Model expands on the previous model by externalizing the build process. That is, instead of performing builds from within an IDE or from a developer's workstation, builds are done in a separate environment. Obviously, externalizing builds from the IDE requires automation of the build step.

The need for this model depends on some key factors:

  • The size and complexity of the project: Large development efforts often require an externalized build environment to coordinate many pieces.
  • Build and deploy frequency: Projects with demanding schedules often require frequent building and testing. There needs to be a way to manage and automate many of these tasks to make that possible. By moving builds to dedicated machines, additional opportunities for automation are created.
  • Multiple development teams: Large development efforts often involve multiple development teams, sometimes under different management. In order to manage the different teams, it is best to have a designated environment for builds and one team responsible for it.

Figure 4-3 illustrates the Assemble Server Model. This model solves many of the drawbacks in the other models. However, there are downsides:

  • The initial planning and creation of scripts takes a significant amount of time. The scripts need to be tested and verified as much as the application itself.
  • The build environment needs to be built and tested. Setting up such an environment takes extra effort that people don't always want to invest. In addition, the necessary resources may not be available; a dedicated build machine is not always feasible.

Nonetheless, if this approach is combined with automation, there are long-term benefits to using this model, and it will end up saving time for large or complex projects. A server-side automated build is highly scalable because the effort it requires does not generally grow with the number of artifacts or developers. Of course, substantial effort will need to be invested in the creation of automation scripts. In the next section, we'll talk more about automation.
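
As a small taste of that automation, the following sketch shows the kind of driver a build server might run: check out a tagged source tree, delegate compilation and EAR assembly to a build file, and archive the result in the binary repository. The CVS location, tag convention, and Ant target name are assumptions for illustration; they are not the book's scripts.

    # nightly_build_sketch.py -- illustrative build-server driver; the CVS location,
    # tag convention, and Ant target name are assumptions, not the book's scripts.
    import subprocess

    def run(cmd):
        print("+ " + " ".join(cmd))
        subprocess.check_call(cmd)

    def build(tag):
        # 1. Check out the tagged source tree from the code repository.
        run(["cvs", "-d", ":pserver:build@cvshost:/cvsroot",
             "checkout", "-r", tag, "PersonalTradeSystem"])
        # 2. Compile and assemble the EAR (the build file itself is not shown here).
        run(["ant", "-f", "PersonalTradeSystem/build.xml", "package"])
        # 3. Archive the packaged build, as in the earlier archiving sketch.
        run(["python", "archive_build_sketch.py"])

    if __name__ == "__main__":
        build("BUILD_1_0_3")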

More Stories By Roland Barcia

Roland Barcia is a consulting IT specialist for IBM Software Services for WebSphere in the New York/New Jersey Metro area. He is the author of one of the most popular article series on the developerWorks WebSphere site, www-106.ibm.com/developerworks/websphere/techjournal/0401_barcia/barcia.html, and is also a coauthor of IBM WebSphere: Deployment and Advanced Configuration. You can find more information about him at http://web.njit.edu/~rb54
