
IBM Cloud: Article

Building An Application Server Data Center


Your latest assignment includes building those applications and application servers that will propel your firm's business to the next level. Where to physically locate these new servers is the question, though. So, it's up to you to help design or evaluate new or existing data center space. Not sure of how to do this? It's not an insurmountable task if you know the basic building blocks which go into every comprehensive data center. This article will steer you through the many options available, and the factors that affect a data center's design.

Data centers no longer only include the cavernous mega-centers of the past. With processing power fitting into a smaller physical footprint, and smaller companies requiring less processing equipment than the much larger enterprise-class company, data centers now range in size from huge facilities to spaces no larger than a janitor's closet. Basic data center characteristics remain constant, however. The key is to build the right data center for your application servers. With this comes the realization that serious up-front planning is required prior to deploying your application servers, whether in a new data center or in an existing one; whether large-scale or small. Again, the formula remains constant.

Your firm has decided to outsource its data center? The same questions should be asked of your outsourcing partner. How robust is their data center, really? Remember that beauty is only skin deep. The same is true for data centers. After you've peeled that onion skin back a bit, you may be pleasantly surprised, or shocked. So, be prepared for both.

Core Elements
No one consciously builds a house on a shaky foundation. The same logic should naturally follow for any data center. One would think that was the case, but you would be surprised to know how many servers are truly exposed out there. With this in mind, it's best to establish some Service Level Objectives (not to be confused with a Service Level Agreement) prior to digging into your data center design. Service Level Objectives are just that: the minimum level of service your firm has reasonably established in terms of server reliability, performance, maintenance windows, up-time, etc. Some of these concepts bridge all three elements of application-server data center design. I call them core elements. They are: physical characteristics, logical characteristics, and criteria of operations.
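Service Level Objectives become much easier to reason about once you translate an availability target into the downtime it actually permits. Here's a quick sketch; the targets shown are illustrative, not figures from any particular SLA:

```python
def allowed_downtime_hours(availability_pct, hours_per_year=8760):
    """Hours of downtime per year permitted by an availability target."""
    return hours_per_year * (1 - availability_pct / 100)

# A 99.9% target allows about 8.76 hours of downtime per year;
# 99.99% allows under an hour. Each extra "nine" costs real money
# in redundant power, cooling, and network paths.
three_nines = allowed_downtime_hours(99.9)
```

Running the numbers like this early keeps the rest of the design honest: every "nine" you promise must be backed by a physical or logical element discussed below.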

Physical Characteristics
We'll start with the most basic element: physical characteristics. Components include space, power, environmental conditions, physical security, fire suppression, lighting, and workspace.

Space
This is a basic concept, but one that can sometimes be underestimated on the importance scale. When planning the application server data center, many factors play into the space question. For instance, are your servers rack-mountable, shelf-mountable, or freestanding? Is the only space available near any windows? How is conditioned air pumped in? Does it flow down from the ceiling, up from a raised floor, or does it flow through the open air? What is the planned density of servers per square foot of floor space? How much aisle space will be allowed for physical connectivity, maintenance activities, and daily care and feeding of these servers? How about access into the data center? Not only from the human side (you wouldn't want to use space you can only access through the men's room), but from the Telco side (how far away is Telco's D-Mark?). Likewise, locating your servers under roof drains, condensate drains, or any other drain is bad enough, but locating them under any pressurized water lines is outrageous. Raised floor systems are nice, but not absolutely necessary. Where raised floors are unavailable, overhead cable-trays or ladder racks will do just fine. As you can see, these questions are all critical and require real thought.
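To get a feel for the space question, a back-of-the-envelope floor-space estimate helps. All dimensions below are illustrative assumptions, not standards; substitute your own rack footprint and the aisle clearances your local codes require:

```python
def floor_space_sqft(num_racks, rack_w_ft=2.0, rack_d_ft=3.5,
                     front_aisle_ft=4.0, rear_aisle_ft=3.0):
    """Rough floor-space estimate per rack row: rack footprint plus
    front and rear aisle allowance. Dimensions are assumptions."""
    per_rack = rack_w_ft * (rack_d_ft + front_aisle_ft + rear_aisle_ft)
    return num_racks * per_rack

# Ten racks at these assumed dimensions need roughly 210 sq ft --
# before workspace, Telco entry, and AC units are accounted for.
estimate = floor_space_sqft(10)
```

The point of the exercise isn't precision; it's discovering early that the "available" room is half the size you actually need.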

Power
Easy, right? A few receptacles here and there, some power strips or surge suppressors, and you're in business. Well, not exactly. Power, conditioned power that is, has been the savior of many a server. However, incorrectly sized or improperly grounded power systems can quickly become a nightmare. This ranges from things as common as improperly sized circuit breakers to under-powered sub-panels. There are many options available to provide conditioned power, including such things as motor generators and isolating transformers.

Tagged to the notion of conditioned power is uninterruptible power. Uninterruptible power systems are just that: sources of "virtually" uninterruptible power. The "virtual" piece is somewhat subjective. In order for power to truly be uninterruptible, it must be supplied via a series of fault-tolerant and redundant subsystems, and be backed by commercial and private power sources, which include batteries and generators. The generators also need fuel to run continuously, so this is also a consideration when designing the logistical portion of an uninterruptible power system. These systems are available in a variety of sizes, from small standalone systems to huge multi-tiered installations. Sizing the unit is a basic prerequisite. The load of a large server-farm will crush an undersized UPS. This holds true for any power solution you may be planning, including uninterrupted power backed by generators. If the server load exceeds the kW output of the generator, it will stall under that load once power is phased in.
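The sizing math is worth sketching before you buy anything. Here's a simplified check, under an assumed headroom derating, of whether a generator can carry a server load without stalling, plus a rough UPS battery runtime; both figures are illustrative and ignore inverter losses and motor inrush:

```python
def generator_ok(load_kw, generator_kw, derate=0.8):
    """True if the generator can carry the load with headroom.
    The derate (an assumed 80% here) leaves margin so the unit
    doesn't stall when the full server load is phased in."""
    return load_kw <= generator_kw * derate

def ups_runtime_minutes(battery_kwh, load_kw):
    """Approximate battery runtime at a given load, in minutes.
    Ignores inverter efficiency and battery aging."""
    return battery_kwh / load_kw * 60

# A 40 kW server load on a 60 kW generator fits within the margin;
# a 50 kW load on the same generator does not.
fits = generator_ok(40, 60)
stalls = not generator_ok(50, 60)
```

Whatever derating your vendor actually specifies, run this check for the full failover sequence: UPS carries the load, generator starts, load transfers.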

I've seen it happen. Believe me, it's not a pretty sight. An in-depth discussion regarding uninterruptible power is especially timely if you are doing anything pertaining to a data center at this time. I say this to remind you of the current energy crisis in the U.S., and the rolling blackout conditions which occur almost daily in several parts of the country. If your firm has already located, or is planning to locate, any mission-critical applications in these affected areas, now is a good time to truly evaluate the power situation in your existing, future, or outsourced data center.

Grounding is another critical factor in the power equation. Improper grounding of main and branch circuits can create ground loops, inviting potentially disastrous situations to all grades of computing equipment. I can't stress how important proper grounding is, especially if your data center is located in a geographic area prone to lightning strikes.

Environmental Conditions
Servers generate heat. Several colocated servers generate lots of heat. Not a problem, you say? You've got these servers isolated and in their own big room with plenty of AC ducts overhead. Problem solved! Well, maybe not. Is the conditioned air supplied to the data center part of the office air conditioning? If it is, then chances are that same system will supply cool air in warm months and warm air in cooler months. The last thing you want is a heated data center in the winter. So, this piece requires some thought. Don't forget about windows. Windows to the outside world (especially ones with a southern exposure) can potentially bring in lots of heat on sunny days, so plan on tinted windows, blinds, or shades.

Large-scale data centers designed for that purpose typically come equipped with large air conditioning units installed within the data center itself. These systems provide conditioned air, properly cooled and humidity controlled. Too little humidity promotes static electricity; too much promotes sweat on or near heat-producing equipment. Smaller, standalone AC systems are available for small-scale data centers.

The last piece of the AC puzzle pertains to how the conditioned air is distributed. Properly directing the cool air is critical to its effect on application server farms. Physical placement of the servers feeds into this as well. For instance, the density of installed servers per square foot of floor space determines the combined BTU output. If there is no available space at the sides or backs of the servers, airflow into the units is severely limited. Likewise, positioning rows of servers front-to-back all but guarantees overheating in a high-density installation. Back-to-back server installations are smarter, provided that you take some precautions, which we'll discuss later.

Physical Security
Think this isn't important? Think again. Do you really want your application lifeline hidden under John the Programmer's desk? No. So, what do you do? This is one of the reasons you want a data center in the first place - controlling access to the boxes and protecting them from physical harm.

Those windows discussed in the previous section may also pose a unique security concern. It's easier to break through glass than brick and mortar. So once again, planning is key.

Physical access to an office data center can be controlled, quite literally, via lock and key. When more sophistication is needed, a card scanner may be employed to track who is in the data center and when. A log entry describing the purpose of a visit is also a good idea. As the sensitivity of the application servers increases, so does the security needed to protect them. In some cases, high-level security measures, such as the use of security guards, hand or retina scanners, and mantraps, are warranted. This rule holds true for data centers of all sizes and configurations. Security measures are definitely something to think about.

Fire Suppression
Often overlooked and sometimes misunderstood, fire suppression is a key component of any data center. Depending on how you view the importance of your application servers, several options are available for fire suppression systems. The simplest system is the basic sprinkler head placed above the protected servers. Effective? Most definitely, except that 10,000 gallons of water dumped on those servers doesn't lend itself to server survivability. Let's face it, sprinklers are meant to protect the building from fire, not to protect the contents of your data center. So, how about FM200? This gas is the successor to Halon, and is environmentally friendly. Unlike a CO2 system, which removes the oxygen from within the data center, FM200 suppresses fire by discharging as a gas onto the surface of the combusting materials. Large amounts of heat energy are absorbed from the surface of the burning material, lowering its temperature below the ignition point. FM200 is not deadly to the data center's human occupants: the system can deploy and, aside from the scare of its activation, workers can exit the data center calmly and without worrying about injuries.

One other thing: remember the conditioned air? Be sure to have the AC units in the data center automatically shut down whenever the FM200 is released, or else the gas will be sucked out with the warm air. Don't forget to seal the data center. FM200 is only effective if it stays at a certain density within the air, so it can't leak out. Also, don't plan to put any servers or equipment directly under the FM200 dispersion-head. The force is great enough to severely damage anything directly below it.

Be aware of how conditioned air is distributed and returned. Air plenums have specific fire code restrictions on the types of cables that pass through them. Raised floor systems and dropped ceilings are typically utilized as supply or return plenums. If you plan to run your communication cables through these spaces, be sure to use materials rated for those specific environments.

Last but not least, your fire insurance underwriter may insist on installing a sprinkler system inside the data center anyway. Just be sure you can install high-temperature heads in the system. Doing so may save you some headaches later.

Lighting
The proper amount of lighting is a key factor in any working environment. Ask any commercial construction company or competent electrician how easy it is to get a building or electrical permit for a commercial endeavor without a reflected ceiling or lighting plan. Beyond code requirements, it's a good idea. How are you supposed to be productive if you can't see what you're working on? Simple but true. So, don't skimp on the number of fixtures. Mood lighting may work well on a date, but has no place in a data center environment.

Work Space
Again, a relatively minor issue that is continually overlooked. Let's face it, there will be the occasional all-nighter spent inside the data center during the course of application server support. So, where do you plan on working? Think about it. The PC balancing act gets kind of old at 2 a.m. Is this any way to properly support your server farm? Work space is a reasonable and justified component of any data center, especially if you are planning to document what you were doing for those five hours. We're not talking about granite work surfaces and mahogany desks here. A small desk or table with a small file cabinet would be nice. A telephone is also essential, especially if you ever need to reach tech support. A chair to sit in would be helpful, as would a shelf for technical documentation and manuals. Forget the coffee maker, though. Liquids really don't belong alongside those servers.

Logical Characteristics
The second core element pertains to the logical characteristics of the data center. Logical characteristics play into the specifics surrounding what you are actually installing. This is where the physical characteristics of the data center are essential to properly supporting the logical characteristics. For instance, the square feet of data center floor space in relation to server densities, power requirements, BTU output, etc.; remember how we touched on all of these? Let's see how this is all intertwined.

There are some basics to think about. Start with those servers. How large are they? Where do you plan to install them; on furniture, in cabinets or racks, as standalones? What about network connectivity? Where is the Telco D-Mark? Is it a switched network, routed network, or hybrid? How about network security? Are multiple DMZs warranted? If your Service Level Objectives call for the high availability of your servers, is the network bulletproof? You can't access the servers without a network. How about server clustering? What kind of clustering? NT? Veritas? Sun? How about dual-NICs? Multi-pathing? Storage-Area-Networks? All of these concepts are important, but just how important? Let's dig a bit.

Server Density
First, you must decide on server densities. Small-scale server farms are no problem. Large-scale farms are a different story. For instance, you could comfortably fit six Compaq 6400R servers in a standard rack. Let's say you need to install 60 servers. Ten racks should do it. But what about power? If your Service Level Objective is high, you may want to consider dual power supplies in those servers. If you put an amp-probe across the lines of an active server, you'll notice that the total load is "X", with both power supplies pulling equal amounts of power. But, when one power supply fails, the remaining supply now pulls the amperage equal to the entire load ("X"). What does this mean in terms of design? It means that you must be very careful to balance those power supplies across multiple electrical circuits, and to ensure that each individual circuit can carry the whole server load by itself. It may even be advisable to feed these circuits from separate sub-panels or separate UPS systems, if possible. Again, planning is key, and all roads lead back to the Service Level Objectives you have set as goals for your data center and the application servers it houses.
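The failover arithmetic above can be sketched in a few lines. The voltages, wattages, and the 80% continuous-load derating below are illustrative assumptions (the 80% figure follows the common NEC continuous-load rule); confirm actual figures with your electrician:

```python
def circuit_amps(server_watts, volts=208, psus=2, failed=False):
    """Per-circuit current draw for a dual-supply server.
    With both supplies up, the load splits evenly; if one supply
    fails, the surviving circuit carries the full load."""
    total_amps = server_watts / volts
    return total_amps if failed else total_amps / psus

def breaker_ok(num_servers, server_watts, breaker_amps, volts=208):
    """Worst-case check: a circuit must carry every attached server's
    FULL load at or below 80% of the breaker rating, because the
    partner circuit may be dead when it matters most."""
    worst_case_amps = num_servers * server_watts / volts
    return worst_case_amps <= breaker_amps * 0.8

# Six assumed 400 W servers fit on a 30 A / 208 V circuit even in
# failover; the same six servers would overload a 10 A circuit.
safe = breaker_ok(6, 400, 30)
unsafe = not breaker_ok(6, 400, 10)
```

Note that the check deliberately ignores the "normal" split-load case: the design target is the day one supply (or one whole circuit) dies.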

Servers and Network Equipment
How will they be connected, and how will those physical connections be properly managed? Well, that depends upon server densities and those pesky Service Level Objectives. Whether the densities are large or small, interconnectivity should be accomplished via some manner of BICSI-approved structured cabling plan. Correctly specified patching panels and/or cable schemes are key to high data throughput. However, installations with high server densities bring their own set of problems; namely, effectively managing those server-to-network cross connections. For instance, a 60-server installation with dual NICs equates to 120 ports of server patching. You should place a patch panel at the top of each server rack or cabinet. These panels tie to corresponding "server" panels in a central network equipment rack (or row), or to distributed network equipment racks for installations with even higher densities.

The network gear should be connected to its own set of "network" patching panels. This methodology allows for clean and manageable network and server demarcations within a single rack or cabinet. Considering the densities of servers and network ports, you could wind up with 240 total ports being patched. This is the maximum number of ports I recommend patching in a single cabinet or rack. More than this can quickly become your worst nightmare, even for the most experienced cable installer. Color-coding of primary and secondary NIC connections is also a good idea, as it lends itself to the fast visual identification of different network-to-server connections.
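The port arithmetic is simple, but worth writing down as the farm grows. A minimal sketch using the 60-server, dual-NIC example above:

```python
def patch_ports(servers, nics_per_server=2, mirror_network_side=True):
    """Total patched ports in a rack: one server-side port per NIC,
    plus a matching network-side port when server and network
    panels are mirrored, as described above."""
    server_ports = servers * nics_per_server
    return server_ports * 2 if mirror_network_side else server_ports

# The suggested ceiling per rack or cabinet from the discussion above.
MAX_PORTS_PER_RACK = 240

# 60 dual-NIC servers with mirrored panels land exactly at that limit.
at_limit = patch_ports(60) == MAX_PORTS_PER_RACK
```

Anything that pushes the total past the ceiling is your cue to split patching across additional racks rather than fight an unmanageable cable bundle.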

BTU Output
This includes all active gear (application servers, network devices, UPS [if installed in close proximity], etc.). A single row of 60 servers is fairly easy to cool, provided your HVAC unit can handle the BTUs. Two rows of servers are a bit more challenging. Let's assume that you've followed OSHA and NFPA (fire code) regulations for clear aisle space and access surrounding this equipment. So, how do you place all those servers in those racks if space is limited, but legal? You could have the rows face front to back, but that would mean that the heated exhaust air from the servers in Row A would hit the faces of the servers in Row B. Not too smart. No problem, though. Have them facing back-to-back. Much smarter! Okay, now for the challenge. Where do you direct the conditioned air? To the rear of each server, right? After all, that's where all the heat is coming from. Wrong! Direct the air at the face of each server. The cool air gets drawn into the unit and cools the internal components. The resultant exhaust air is also much cooler. So, whether the air comes up from the raised floor, or down from ducts above, strategically place these to direct the air at the front of the servers. This holds true for anything your server is mounted or placed in, except for enclosed cabinets. In those instances, have the conditioned air pumped in from directly above or below the cabinet. Be sure to have those cabinets equipped with fans to draw the conditioned air through and out.
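Converting electrical load into cooling demand is straightforward arithmetic: every watt your gear draws ends up as roughly 3.412 BTU per hour of heat. A quick sketch (the server wattage is an illustrative assumption):

```python
def btu_per_hour(total_watts):
    """Electrical load converted to heat output: 1 W ~= 3.412 BTU/h."""
    return total_watts * 3.412

def cooling_tons(total_watts):
    """Cooling required in refrigeration tons (1 ton = 12,000 BTU/h)."""
    return btu_per_hour(total_watts) / 12_000

# 60 servers at an assumed 400 W each is a 24 kW load --
# roughly 82,000 BTU/h, or about 7 tons of cooling.
load_watts = 60 * 400
tons_needed = cooling_tons(load_watts)
```

Remember to add the UPS, network gear, and lighting into `total_watts`; they all dump heat into the same room.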

How about connectivity to the outside world? If your servers face, or are tied to anything Web-facing, it is in your best interest to consider more than one way to the Internet. This concept bridges both physical and logical mapping. If your data center is in-house, your options may be somewhat limited, unless you happen to have SONET equipment or are fed via alternate serving wire centers. If you have outsourced your data center, be sure to closely examine your provider's peering arrangement with other carriers. It's really important.

Backup and Restoral
Other things to think about include backup and restoral. This particular requirement bridges physical, logical, and operational criteria. So, what about backup and restoral? Is it okay to store all backups on site, or must you transport tapes to a secure facility? Will the RAID arrays in your servers do the trick by themselves? Will you have someone perform the backups manually? Will you utilize SAN technology, or might you want a tape robot? Backup and restoral is a critical component to think about, especially when high Service Level Objectives have been set.
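One number worth estimating up front is the backup window itself, since it constrains nearly every choice above (manual vs. robot, LAN vs. SAN). A rough sketch, ignoring compression and per-file overhead; the throughput figure is an assumption you should replace with a measured rate:

```python
def backup_hours(data_gb, throughput_mb_s, drives=1):
    """Rough full-backup duration in hours. Ignores compression,
    per-file overhead, and tape-change time; assumes drives
    stream in parallel."""
    seconds = data_gb * 1024 / (throughput_mb_s * drives)
    return seconds / 3600

# One TB at an assumed 100 MB/s takes just under three hours on a
# single drive -- fine nightly, painful if the window is one hour.
single_drive = backup_hours(1024, 100)
parallel = backup_hours(1024, 100, drives=4)
```

If the computed window doesn't fit inside the maintenance window your Service Level Objectives allow, that's your answer on whether a tape robot or SAN-based backup is "nice to have" or mandatory.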

Logical Access
Make sure you plan for easy logical access to each server. The last thing you want to do is drag keyboards, monitors, and mice into and out of the data center. This is especially true when you are upgrading OS, middleware, or applications for multiple servers in the data center. Instead, be sure to map out a strategy for this, via either a LAN or a traditional KVM-style access. Remote VPN access is nice whenever remote support is required.

These are some basic examples of the types of logical issues that must be properly planned.

Criteria of Operations
Now, on to the third and final core element: criteria of operations. These are the operational realities and plans that must be followed in order to meet the Service Level Objectives that have driven the data center design to this point.

The criteria of operations is the all-encompassing blueprint for the daily activities that affect the data center and the servers it houses. It maps out the strategy for everything from change management to application release methodologies and control. It is the very heart of every successfully run data center. It covers all applications, servers, network components and connections. It addresses provisioning, capacity planning, sparing levels for immediate repairs, and what to do in a crisis mode. It covers routine access and escalation procedures. It is the bible of your data center. It should contain the run-book for every application and system that needs to be supported.

The criteria of operations can range from a simple document covering a small data center housing a single application server, to an online interactive series of procedures and documents that support multiple, geographically dispersed data centers, applications, and servers. Planning and building your criteria of operations requires a great deal of thought, and is the glue that ties everything back to the Service Level Objectives you started with.

If your head is spinning after reading this, don't worry. Building a robust data center can be relatively pain-free if you simply stop and think before charging forward. Understanding the components and concepts presented in this article will ultimately provide you with much more than the basic building blocks needed for an endeavor of this nature. It will provide you with a roadmap to performing the type of due-diligence needed to ensure success, whether building your own data center, helping to evaluate an existing center, or choosing/evaluating an outsource partner's data center. No kidding.

Even if a facilities professional has been chartered with building or evaluating a data center, don't be afraid to open your mouth if you observe or hear something that seems off-kilter. It's easy for anyone to mistakenly overlook an item. In the end, the server you save may be your own.

More Stories By Joe Farsetta

Joe is an engineer with over 20 years of industry experience in telecommunications, networking, operations, business process architecture, applications, and support. An entrepreneur and inventor, Joe’s past engagements have included Unilever, NJ Transit, and a Regional Directorship at Bell Atlantic Network Integration. He is currently employed by one of the world's premier Web-hosting providers, as well as operating a consultancy in the New York metropolitan area. He can be reached at XXXXXXXX.

