Build and Deploy Procedures

From Chapter 4

This chapter, "Build and Deploy Procedures," is excerpted from the new book, IBM WebSphere: Deployment and Advanced Configuration, authored by Roland Barcia, Bill Hines, Tom Alcott, and Keys Botzum. © International Business Machines Corporation 2005. ISBN 0-13-146862-6. To learn more, visit www.phptr.com/title/0131468626.

In the last chapter, we provided you with a quick start to WebSphere Application Server (WAS) by configuring and deploying the Personal Trade System. The deployment was simple and straightforward, using the WAS administrative console to do most of the work. With small applications, this approach is often sufficient. However, for most J2EE applications, deployment requires much more planning and process. In fact, the entire application lifecycle requires rigorous processes.

In this chapter, we will focus on three closely related areas: builds, deployment, and server configuration. We will start by discussing some key reasons why rigor is important and then define some terminology that we'll be using for the rest of the chapter. Then we'll identify three common approaches to build and deploy. Following that, we will focus on automation.

Automation of the build and deploy steps can reduce the amount of effort required to produce and then deploy an application. Automation can also reduce the likelihood of human error, which is not an insignificant concern in large and complex projects. In addition to automating the assembly and building of applications, we will offer advice on how to automate server configuration.

Procedures Matter

The enterprise development model is more complex than traditional small application development: code must be written, compiled, assembled, and then deployed (presumably with some testing included). That is, after the code is compiled, the application must be assembled from a set of modules into a single application EAR. In addition, deployment includes the implicit concept of "binding," which means taking a built application and binding its contents to the target runtime environment. These two concepts are very powerful and give applications a great deal of flexibility, but they do require additional discipline. In these situations, build and deploy are core functions that must be managed as carefully as any other part of the project.
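The assembly step can be sketched in miniature. An application EAR is simply a ZIP archive whose modules are listed in a META-INF/application.xml deployment descriptor. The following Python fragment is illustrative only: the module names and descriptor contents are hypothetical, and a real build would use the vendor's assembly tooling or Ant rather than hand-rolled code.

```python
import zipfile

# Hypothetical module archives produced by earlier compile/package steps.
MODULES = ["trade-web.war", "trade-ejb.jar"]

# A minimal application.xml listing the modules (illustrative content only).
APPLICATION_XML = """<?xml version="1.0" encoding="UTF-8"?>
<application>
  <display-name>PersonalTradeSystem</display-name>
  <module><web><web-uri>trade-web.war</web-uri>
    <context-root>/trade</context-root></web></module>
  <module><ejb>trade-ejb.jar</ejb></module>
</application>
"""

def assemble_ear(ear_path, module_paths):
    """Assemble an application EAR: the modules plus META-INF/application.xml."""
    with zipfile.ZipFile(ear_path, "w") as ear:
        ear.writestr("META-INF/application.xml", APPLICATION_XML)
        for path in module_paths:
            # In a real build these are module archives on disk; placeholder
            # contents keep the sketch self-contained.
            ear.writestr(path, b"placeholder module contents")
    return ear_path
```

The point of the sketch is only that "assembly" is a mechanical, repeatable packaging step, which is exactly why it lends itself to automation later in this chapter.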

Setting up high-quality procedures correctly can set the tone for the rest of the project.

In our experience, failing to develop solid procedures often causes projects to experience serious problems and occasionally results in outright project failure. In this book, we assume a rigorous approach to development, roughly like this:

  • Developers write code following high-quality software engineering principles. This includes a configuration management system for their source code and other artifacts.
  • Developers execute unit tests against their code before checking it into the repository.
  • Code is built into a complete system following a well-defined procedure.
  • Code is deployed to a runtime environment (an actual WAS instance) following some well-defined procedure.
  • Deployments are verified using a well-defined set of component test cases. Component tests are essentially a subset of development unit tests and are focused on cross-component function and environment sanity.
  • The target application server environment is configured using well-defined procedures.
  • Multiple runtime environments are available for different purposes (development integration, system test, performance, etc.) as defined in Chapter 16, "Ideal Development and Testing Environments."

If you are not following procedures like these, your projects will suffer problems, quite possibly serious ones. Unfortunately, due to space constraints, we cannot discuss every aspect of this basic approach. Instead, we are concerned here with the build, deployment, and server configuration steps. Notice that the preceding list makes no mention of automation. This is not an accident: rigorous procedures can be developed without automation and will work fine for projects up to a certain size. But for the reasons already discussed, automation can greatly benefit these basic approaches. Many parts of this book focus on using automation.

In this chapter, we will focus on moving an application out of the desktop development environment and into development integration. Because the deployment life cycle starts from the development environment, creating a rigorous process for moving a build out of development can set the tone for a successful deployment. Rigorous process in development can be a model for rigorous deployment in other environments, such as system tests or performance testing. We will focus on these other environments in Chapter 16.

Development and Build Terminology


Before continuing with the remainder of this chapter, we need to define some common terms that we will be using. Some of this terminology may already be familiar; to keep our usage consistent, we define the key terms here:

1.  Runtime or runtime environment: A place where code can be executed. Basically, this is a system or set of systems that includes WAS and other needed components where application code is deployed for various purposes.

2.  IDE: An Integrated Development Environment used by an application developer to write, compile, and unit test code. WebSphere Studio is an example of an IDE.

3.  Unit Test Environment (UTE): A runtime where developers run their unit tests. This runtime is often integrated with the IDE to offer developers quick testing and debugging. The WAS unit test environment inside WebSphere Studio is an example of a UTE. If developers do not have access to an IDE with a UTE, it is not uncommon for them to have a WAS install on their desktop for unit testing.

4.  Verification runtime: A runtime needed to verify a successful build. This runtime is usually an application server and usually is located on the same machine where an official build is done. Depending on where the build is done, this runtime can be a UTE inside of an IDE, like WebSphere Studio, or a standalone WAS install on a dedicated build server.

5.  Code repository: A place where developers store versions of code and code history to share with others. Examples of code repositories are Rational Clearcase and the Concurrent Versioning System (CVS). We will use CVS throughout this book because it is readily available. Appendix C, "Setup Instructions for Samples," has information about where you can obtain CVS.

6.  Golden Workspace: A special term we use to represent a desktop snapshot of a deployable version of the code. This is essentially a version of the system loaded into a single IDE instance. This instance represents a final version before an official build.

7.  Build server: A centralized server used to run automated builds. A centralized build can be used to coordinate projects. Specialized build scheduling and publishing tools, such as CruiseControl or AntHill, can be installed on a build server to add an extra level of automation.

8.  Binary repository: A place to store packaged builds that are ready for deployment. Builds must be stored and versioned just like code.
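To make the "builds must be stored and versioned" point concrete, here is a small sketch of how a binary repository might assign each published build a unique, traceable label. The directory layout and naming convention here are our own invention for illustration, not a WAS or tooling requirement; any scheme works as long as an official build can never be silently overwritten.

```python
import os
import shutil

def publish_build(ear_path, repo_dir, version):
    """Copy a built EAR into the binary repository under a versioned name.

    The naming scheme (app-<version>.ear) is illustrative; the essential
    property is that every official build gets a unique, traceable label
    and that a published build is never overwritten.
    """
    base = os.path.splitext(os.path.basename(ear_path))[0]
    dest = os.path.join(repo_dir, "%s-%s.ear" % (base, version))
    if os.path.exists(dest):
        raise RuntimeError("refusing to overwrite published build: " + dest)
    os.makedirs(repo_dir, exist_ok=True)
    shutil.copy2(ear_path, dest)
    return dest
```

With a convention like this, rolling an environment back to a prior version means fetching a known artifact rather than attempting to reproduce an old build from source.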

Build and Deployment Models

Now that we have outlined some key concepts, we are going to define three build and deployment models at increasing levels of sophistication. The three models are Assemble Connection Model, Assemble Export Model, and Assemble Server Model. We'll return to the discussion of server configuration later.

ASSEMBLE CONNECTION MODEL
The simplest possible method of build and deploy is the Assemble Connection Model. Figure 4-1 illustrates this model.

In this model, build, assembly, and verification steps are all performed from a desktop using either an IDE or an assembly tool. For example, WebSphere Studio contains tools for deploying both server configurations and applications directly from a WebSphere Studio workspace into a runtime environment.

When using this model, it is essential to use a Golden Workspace as part of the official build and deploy process. That is, instead of deploying directly from an uncontrolled developer's desktop, use a dedicated desktop (or at least a dedicated workspace) for creating and exporting to the application server.

This model is simple for small development environments and it is easy to implement. However, it has major drawbacks:

  • The development team makes use of deployment shortcuts available to developers. For example, WebSphere Studio has a remote server deployment feature that enables an application to be deployed directly from the development workbench. No EAR is ever produced, and thus the deployment details are hidden from developers. Despite its simplicity, this deployment process is less portable because J2EE defines no way of deploying applications that are not packaged as an EAR or one of its submodules.
  • Because no EAR file is produced, there is no versioning of any EAR file. If an environment needs to roll back to a prior version, the build must be reproduced from the source. This can lead to unanticipated errors in reproducing the correct build if the source repository tool is not sophisticated or if assemblers do not use code repositories correctly.
  • It takes great discipline for developers to make this work. It is very difficult to consistently reproduce a build. Deploying directly from the IDE or application source creates too many dependencies on the idiosyncrasies of individual workstations and workspaces.
  • The process used in development and testing does not usually reflect the process used in production. This means the deployment process for production will most likely suffer from a lack of proper testing.
  • This process does not scale well. As the number of artifacts and developers grows, the assembler and deployer become a bottleneck: many artifacts and many developers are feeding input to one person executing a manual process.

We strongly oppose this approach. We include it here only as a point of contrast to the next two approaches, which are more reasonable.

ASSEMBLE EXPORT MODEL
The key problem with the previous model is that the deployment step used during development does not in any way reflect what is typically done in production. That is unfortunate because it delays the discovery of inevitable problems. The Assemble Export Model recognizes this weakness and moves to a more robust approach. Here, the development tools are used to produce the deployable package (either from a Golden Workspace or from deployment scripts). The deployable package is then delivered to the runtime environment and deployed using a well-defined procedure. Figure 4-2 illustrates this model.
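In practice, the "well-defined procedure" for the deployment half of this model is often just a documented command line run the same way every time. As a sketch (the application name, EAR path, and omitted cell/node targeting options are hypothetical), the exported EAR can be installed with the wsadmin scripting client; the helper below only constructs the command so the deployer never has to retype it:

```python
def wsadmin_install_command(ear_path, app_name, wsadmin="wsadmin.sh"):
    """Build a wsadmin command line that installs an exported EAR.

    Uses wsadmin's -c option to run a single Jacl command; cell, node,
    and server targeting options are omitted here for brevity.
    """
    jacl = "$AdminApp install %s {-appname %s}" % (ear_path, app_name)
    return [wsadmin, "-c", jacl]
```

Capturing the invocation in one place is a first, small step toward the fuller automation discussed under the next model.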

This results in a process that is rigorous but potentially time-consuming, depending on the complexity of the application and deployment environment. It is also subject to human error. For systems of reasonable size, this approach will work. With large environments, however, this is often insufficient.

Nonetheless, this is a clear and defined way of producing an EAR file every time. The application is deployed to a runtime similar to how it will be done in production. Therefore, certain errors can be found in this model that could not be found in the previous model. This approach has some nice advantages. Unfortunately, this model still suffers from some drawbacks:

  • There is still a scalability issue because someone is needed to create the EAR file from within the IDE (admittedly, this is only a few mouse clicks) and then execute the deployment using the WAS admin tools. In addition, the application assembler has to verify that changes made by various developers work together. As the number of developers increases, this task can become more difficult for one individual.
  • The build process is dependent on the development team. In some organizations, this is an issue in itself.

This model has been used successfully in numerous organizations. Furthermore, if the project grows to a larger size, it can be upgraded to the next model.

ASSEMBLE SERVER MODEL
The Assemble Server Model expands on the previous model by externalizing the build process. That is, instead of performing builds from within an IDE or from a developer's workstation, builds are done in a separate environment. Obviously, externalizing builds from the IDE requires automation of the build step.

The need for this model is dependent on some key factors:

  • The size and complexity of the project: Large development efforts often require an externalized build environment to coordinate many pieces.
  • Build and deploy frequency: Projects with demanding schedules often require frequent building and testing. There needs to be a way to manage and automate many of the tasks to allow this to happen. By moving builds to dedicated machines, additional opportunities for automation are created.
  • Multiple development teams: Large development efforts often involve multiple development teams, sometimes under different management. In order to manage the different teams, it is best to have a designated environment for builds and one team responsible for it.

Figure 4-3 illustrates the Assemble Server Model. This model solves many of the drawbacks in the other models. However, there are downsides:

  • The initial planning and creation of scripts takes significant amounts of time. The scripts need to be tested and verified as much as the application itself.
  • The build environment itself needs to be built and tested. Setting up such an environment takes extra effort that people are not always willing to invest. In addition, resources may not be available; having an extra build machine may not be feasible.

Nonetheless, if this approach is combined with automation, there are long-term benefits to using this model, and it will end up saving time for large or complex projects. A server-side automated build is highly scalable because it does not generally depend on the number of artifacts or developers. Of course, substantial effort will need to be invested in the creation of automation scripts. In the next section, we'll talk more about automation.
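The long-term benefit comes from scripting the whole sequence. The sketch below shows the shape of a server-side build: each stage is an external command, run in order, and the pipeline stops at the first failure. The tool invocations are placeholders (a real project would drive Ant targets and, optionally, a scheduler such as CruiseControl), and the `runner` parameter is injectable so the pipeline logic can be exercised without the real tools installed.

```python
import subprocess

# Placeholder commands for each build stage; a real pipeline would invoke
# cvs/ant/wsadmin with project-specific arguments.
PIPELINE = [
    ("checkout", ["cvs", "export", "-r", "BUILD_TAG", "trade"]),
    ("build",    ["ant", "-f", "build.xml", "package"]),
    ("verify",   ["ant", "-f", "build.xml", "component-test"]),
]

def run_pipeline(stages, runner=subprocess.call):
    """Run each (name, argv) stage in order; stop at the first failure.

    Returns the list of completed stage names and the name of the failed
    stage (or None if every stage succeeded).
    """
    completed = []
    for name, argv in stages:
        if runner(argv) != 0:
            return completed, name
        completed.append(name)
    return completed, None
```

Because the pipeline neither knows nor cares how many developers fed the source repository, this structure is what makes the server-side build scale where the manual models do not.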

About the Author

Roland Barcia is a consulting IT specialist for IBM Software Services for WebSphere in the New York/New Jersey Metro area. He is the author of one of the most popular article series on the developerWorks WebSphere site, www-106.ibm.com/developerworks/websphere/techjournal/0401_barcia/barcia.html, and is also a coauthor of IBM WebSphere: Deployment and Advanced Configuration. You can find more information about him at http://web.njit.edu/~rb54


