Build and Deploy Procedures

From Chapter 4

This chapter, "Build and Deploy Procedures" is excerpted from the new book, IBM WebSphere: Deployment and Advanced Configuration, authored by Roland Barcia, Bill Hines, Tom Alcott and Keys Botzum. © International Business Machines Corporation 2005. ISBN 0-13-146862-6. To learn more, www.phptr.com/title/0131468626.

In the last chapter, we provided you with a quick start to WebSphere Application Server (WAS) by configuring and deploying the Personal Trade System. The deployment was simple and straightforward, using the WAS administrative console to do most of the work. With small applications, this approach is often sufficient. However, for most J2EE applications, deployment requires much more planning and process. In fact, the entire application lifecycle requires rigorous processes.

In this chapter, we will focus on three closely related areas: builds, deployment, and server configuration. We will start by discussing some key reasons why rigor is important and then define some terminology that we'll be using for the rest of the chapter. Then we'll identify three common approaches to build and deploy. Following that, we will focus on automation.

Automation of the build and deploy steps can reduce the amount of effort required to produce and then deploy an application. Automation can also reduce the likelihood of human error, not an insignificant concern in large and complex projects. In addition to automating the assembly and building of applications, we will offer advice on how to automate server configuration.

Procedures Matter

The enterprise development model is more complex than traditional small application development: code must be written, compiled, assembled, and then deployed (presumably with some testing included). That is, after the code is compiled, the application must be assembled from a set of modules into a single application EAR. In addition, deployment includes the implicit concept of "binding," which means taking a built application and binding its contents to the target runtime environment. These two concepts are very powerful and give applications a great deal of flexibility, but they require additional discipline. In these situations, build and deploy are core functions that must be managed as carefully as any other part of the project.
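
To make the assembly step concrete, the following sketch shows the essence of what an assembly tool produces: an EAR is simply a ZIP archive containing the application's modules plus a META-INF/application.xml deployment descriptor. This is a minimal illustration in Python; the module names and the simplified descriptor are our own placeholders, not taken from the book.

```python
# Minimal sketch of EAR assembly. An EAR is a ZIP archive that packages the
# application's modules together with a META-INF/application.xml descriptor.
# Module names and the (simplified) descriptor below are placeholders.
import zipfile

APPLICATION_XML = """<?xml version="1.0" encoding="UTF-8"?>
<application>
  <display-name>PersonalTradeSystem</display-name>
  <module>
    <web>
      <web-uri>trade-web.war</web-uri>
      <context-root>/trade</context-root>
    </web>
  </module>
  <module>
    <ejb>trade-ejb.jar</ejb>
  </module>
</application>
"""

def assemble_ear(ear_path, modules):
    """Package the given module archives into a single application EAR."""
    with zipfile.ZipFile(ear_path, "w") as ear:
        ear.writestr("META-INF/application.xml", APPLICATION_XML)
        for module in modules:
            ear.write(module)  # each WAR/EJB JAR goes at the archive root

assemble_ear("PersonalTradeSystem.ear", ["trade-web.war", "trade-ejb.jar"])
```

In practice, an IDE or build tool performs this packaging, but the output is the same kind of self-contained archive, and binding that archive to a particular runtime happens later, at deployment.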

Setting up high-quality procedures at the outset can set the tone for the rest of the project.

In our experience, projects that fail to develop solid procedures often experience serious problems, and occasionally those problems lead to outright project failure. In this book, we assume a rigorous approach to development, roughly like this:

  • Developers write code following high-quality software engineering principles. This includes a configuration management system for their source code and other artifacts.
  • Developers execute unit tests against their code before checking it into the repository.
  • Code is built into a complete system following a well-defined procedure.
  • Code is deployed to a runtime environment (an actual WAS instance) following some well-defined procedure.
  • Deployments are verified using a well-defined set of component test cases. Component tests are essentially a subset of development unit tests and are focused on cross-component function and environment sanity.
  • The target application server environment is configured using well-defined procedures.
  • Multiple runtime environments are available for different purposes (development integration, system test, performance, etc.) as defined in Chapter 16, "Ideal Development and Testing Environments."

If you are not following procedures like these, your projects will suffer problems, quite possibly serious ones. Unfortunately, due to space constraints, we can't discuss every aspect of this basic approach. Instead, we are concerned here with the build, deployment, and server configuration steps. Notice that the preceding list makes no mention of automation. This is not an accident. Rigorous procedures can be developed without automation and will work fine for projects up to a certain size. But for the reasons already discussed, automation can greatly benefit these basic approaches. Many parts of this book focus on using automation.

In this chapter, we will focus on moving an application out of the desktop development environment and into development integration. Because the deployment lifecycle starts in the development environment, creating a rigorous process for moving a build out of development can set the tone for a successful deployment. A rigorous process in development can serve as a model for rigorous deployment in other environments, such as system test or performance testing. We will focus on these other environments in Chapter 16.

Development and Build Terminology

Before continuing with the remainder of this chapter, we need to define some common terms that we are going to be using. Some of this terminology may already be familiar, but to keep our usage consistent, we define the key terms here:

1.  Runtime or runtime environment: A place where code can be executed. Basically, this is a system or set of systems that includes WAS and other needed components where application code is deployed for various purposes.

2.  IDE: An Integrated Development Environment used by an application developer to write, compile, and unit test code. WebSphere Studio is an example of an IDE.

3.  Unit Test Environment (UTE): A runtime where developers run their unit tests. This runtime is often integrated with the IDE to offer developers quick testing and debugging. The WAS unit test environment inside WebSphere Studio is an example of a UTE. If developers do not have access to an IDE with a UTE, it is not uncommon for them to have a WAS install on their desktop for unit testing.

4.  Verification runtime: A runtime needed to verify a successful build. This runtime is usually an application server and usually is located on the same machine where an official build is done. Depending on where the build is done, this runtime can be a UTE inside of an IDE, like WebSphere Studio, or a standalone WAS install on a dedicated build server.

5.  Code repository: A place where developers store versions of code and code history to share with others. Examples of code repositories are Rational ClearCase and the Concurrent Versions System (CVS). We will use CVS throughout this book because it is readily available. Appendix C, "Setup Instructions for Samples," has information about where you can obtain CVS.

6.  Golden Workspace: A special term we use to represent a desktop snapshot of a deployable version of the code. This is essentially a version of the system loaded into a single IDE instance. This instance represents a final version before an official build.

7.  Build server: A centralized server needed to run automated builds. A centralized build can be used to coordinate projects. Specialized build scheduling and publishing tools, such as CruiseControl or AntHill, can be installed on a build server to add an extra level of automation.

8.  Binary repository: A place to store packaged builds that are ready for deployment. Builds must be stored and versioned just like code.
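
To illustrate the last point, here is a small sketch of how a binary repository might store each official build under a unique, sortable version stamp so that any prior build can be redeployed without rebuilding from source. The directory layout and naming convention are our own assumptions, shown in Python for concreteness.

```python
# Sketch of a binary repository: each official build is stored under a
# unique, sortable version stamp so any prior build can be redeployed as-is.
# The repository path and naming convention are illustrative assumptions.
import shutil
import time
from pathlib import Path

BINARY_REPO = Path("/builds/repository")

def archive_build(ear_path, app_name):
    """Copy a packaged EAR into a timestamped version directory."""
    version = time.strftime("%Y%m%d-%H%M%S")
    target_dir = BINARY_REPO / app_name / version
    target_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy2(ear_path, target_dir)
    return target_dir / Path(ear_path).name

archived = archive_build("PersonalTradeSystem.ear", "PersonalTradeSystem")
print("Build archived at", archived)
```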

Build and Deployment Models

Now that we have outlined some key concepts, we are going to define three build and deployment models at increasing levels of sophistication. The three models are Assemble Connection Model, Assemble Export Model, and Assemble Server Model. We'll return to the discussion of server configuration later.

The simplest possible method of build and deploy is the Assemble Connection Model. Figure 4-1 illustrates this model.

In this model, build, assembly, and verification steps are all performed from a desktop using either an IDE or an assembly tool. For example, WebSphere Studio contains tools for deploying both server configurations and applications directly from a WebSphere Studio workspace into a runtime environment.

When using this model, it is essential to use a Golden Workspace as part of the official build and deploy process. That is, instead of deploying directly from an uncontrolled developer's desktop, use a dedicated desktop (or at least a dedicated workspace) for assembling the application and exporting it to the application server.

This model is simple and easy to implement for small development environments. However, it has major drawbacks:

  • The development team makes use of deployment shortcuts available to developers. For example, WebSphere Studio has a remote server deployment feature that enables an application to be deployed directly from the development workbench. No EAR is ever produced, and thus the deployment details are hidden from developers. Despite its simplicity, this deployment process is less portable because J2EE defines no way of deploying applications that are not packaged as an EAR or one of its submodules.
  • Because no EAR file is produced, there is no versioning of any EAR file. If an environment needs to roll back to a prior version, the build must be reproduced from the source. This can lead to unanticipated errors in reproducing the correct build if the source repository tool is not sophisticated or if assemblers do not use the code repository correctly.
  • It takes great discipline for developers to make this work. It is very difficult to consistently reproduce a build. Deploying directly from the IDE or application source creates too many dependencies on the idiosyncrasies of individual workstations and workspaces.
  • The process used in development and testing does not usually reflect the process used in production. This means the deployment process for production will most likely suffer from a lack of proper testing.
  • This process does not scale well. As the artifacts and the number of developers grow, the assembler and deployer become a bottleneck: many artifacts and developers are feeding input to a human who is executing a manual process.

We strongly oppose this approach. We include it here only as a point of contrast to the next two approaches, which are more reasonable.

The key problem with the previous model is that the deployment step used during development does not in any way reflect what is typically done in production. That's unfortunate, because it delays the discovery of inevitable problems. In the Assemble Export Model, we recognize this weakness and move to a more robust model. Here, the development tools are used to produce the deployable package (either from a Golden Workspace or from deployment scripts). The deployable package is then delivered to the runtime environment and deployed using a well-defined procedure. Figure 4-2 illustrates this model.

This results in a process that is rigorous but potentially time-consuming, depending on the complexity of the application and deployment environment. It is also subject to human error. For systems of reasonable size, this approach will work. With large environments, however, this is often insufficient.
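
To give a feel for what a "well-defined procedure" can look like on the runtime side, here is a minimal deployment sketch using the wsadmin scripting client with Jython. The EAR path, application name, and target node/server are placeholder values, and the available options vary by WAS version, so treat this as an outline rather than a definitive script.

```python
# Minimal wsadmin (Jython) deployment sketch, run with something like:
#   wsadmin -lang jython -f deployApp.py
# The AdminApp and AdminConfig objects are supplied by wsadmin itself.
# Paths, the application name, and the target node/server are placeholders.

earFile = '/builds/PersonalTradeSystem.ear'
options = '[-appname PersonalTradeSystem -node myNode -server server1]'

AdminApp.install(earFile, options)  # install the exported EAR into the cell
AdminConfig.save()                  # commit the configuration change
```

Because the same script can be run against development integration, system test, and production, deploying this way exercises the production procedure long before production day.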

Nonetheless, this is a clear and defined way of producing an EAR file every time, and the application is deployed to a runtime in much the same way it will be deployed in production. Therefore, certain errors can be found in this model that could not be found in the previous model. Unfortunately, this model still suffers from some drawbacks:

  • There is still a scalability issue because someone is needed to create the EAR file from within the IDE (admittedly, this is only a few mouse clicks) and then execute the deployment using the WAS admin tools. In addition, the application assembler has to verify that changes made by various developers work together. As the number of developers increases, this task can become more difficult for one individual.
  • The build process is dependent on the development team. In some organizations, this is an issue in itself.

This model has been used successfully in numerous organizations. Furthermore, if the project grows to a larger size, it can be upgraded to the next model.

The Assemble Server Model expands on the previous model by externalizing the build process. That is, instead of performing builds from within an IDE or from a developer's workstation, builds are done in a separate environment. Obviously, externalizing builds from the IDE requires automation of the build step.

The need for this model is dependent on some key factors:

  • The size and complexity of the project: Large development efforts often require an externalized build environment to coordinate many pieces.
  • Build and deploy frequency: Projects with demanding schedules often require frequent building and testing in order to stay on schedule. There needs to be a way to manage and automate many of the tasks to allow this to happen. By moving builds to dedicated machines, additional opportunities for automation are created.
  • Multiple development teams: Large development efforts often involve multiple development teams, sometimes under different management. In order to manage the different teams, it is best to have a designated environment for builds and one team responsible for it.

Figure 4-3 illustrates the Assemble Server Model. This model solves many of the drawbacks in the other models. However, there are downsides:

  • The initial planning and creation of scripts take a significant amount of time. The scripts need to be tested and verified as much as the application itself.
  • The build environment needs to be built and tested. Setting up such an environment takes extra effort that people are not always willing to invest. In addition, the necessary resources may not be available; a dedicated build machine may not be feasible.

Nonetheless, when this approach is combined with automation, it offers long-term benefits and will end up saving time for large or complex projects. A server-side automated build is highly scalable because its cost does not generally grow with the number of artifacts or developers. Of course, substantial effort will need to be invested in the creation of automation scripts. In the next section, we'll talk more about automation.
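
As a taste of what those automation scripts involve, the sketch below outlines a server-side build driver in Python: it checks a tagged source tree out of CVS, delegates compilation and packaging to a build tool such as Ant, and copies the result into the binary repository. The repository locations, tag name, and Ant target are illustrative assumptions, not prescriptions from the book.

```python
# Outline of an automated server-side build: check out a tagged source tree
# from CVS, compile and package it, and archive the resulting EAR.
# Repository locations, tag names, and the Ant target are illustrative.
import shutil
import subprocess

def run(cmd, cwd=None):
    """Run one build step, failing the whole build on any error."""
    print(">", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)

def build(tag):
    # 1. Fetch the exact tagged version from the code repository.
    run(["cvs", "-d", ":pserver:build@cvsserver:/cvsroot",
         "checkout", "-r", tag, "trade"])
    # 2. Compile and assemble the EAR; a 'package' target is assumed to
    #    exist in the project's build.xml.
    run(["ant", "package"], cwd="trade")
    # 3. Store the packaged EAR in the binary repository under the tag name.
    shutil.copy2("trade/dist/PersonalTradeSystem.ear",
                 "/builds/repository/PersonalTradeSystem-%s.ear" % tag)

build("REL_1_0")
```

A scheduling tool such as CruiseControl or AntHill would typically invoke a driver like this on a timer or in response to repository changes, and a verification runtime would then run the component tests against the freshly deployed build.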

About the Author

Roland Barcia is a consulting IT specialist for IBM Software Services for WebSphere in the New York/New Jersey metro area. He is the author of one of the most popular article series on the developerWorks WebSphere site, www-106.ibm.com/developerworks/websphere/techjournal/0401_barcia/barcia.html, and is also a coauthor of IBM WebSphere: Deployment and Advanced Configuration. You can find more information about him at http://web.njit.edu/~rb54.
