
AppDynamics Puts the Management in Application Performance Management

The future of application performance management is in real-time visibility, action, and integration

For a very long time now, APM (Application Performance Management) has been a misnomer. It has really been application performance monitoring, with very little management occurring beyond triggering manual processes that require the attention of operators and developers. APM solutions have always been great at generating eye-candy reports about response time and components and, in later implementations, dependencies on application and even network infrastructure.

But it has rarely been the case that APM solutions have really been about, well, managing application performance. Certainly they've attempted to provide the data necessary to manually manage applications, and they do an excellent job of correlation and, in some cases, deep troubleshooting of root-cause performance problems. But real-time, dynamic, on-demand performance management? Not so much. AppDynamics, however, is attempting to change that. Its focus is on managing the performance and availability of applications in real time (or at least near real time), and it does so across cloud computing environments as well as inside the data center.


One of the negatives of cloud-based applications is a lack of visibility into the supporting infrastructure. This leaves operators and administrators out of touch with the root causes of poor application performance because they are unable to "see" into the data path. While they may know from traditional monitoring solutions that response time is degrading, it may be nearly impossible for them to determine whether that is because an intermediate infrastructure service is performing poorly or, more likely, because the application is experiencing heretofore unseen load. While AppDynamics' solution provides visibility into both scenarios, it is the latter that is most intriguing when applied to a volatile, elastic scalability scenario, because not only is it capable of recognizing the situation, it can take action to redress it as well.


It's all very user-configurable, so the policies applied are completely under the control of the organization, but the solution does provide the ability to automatically launch new instances of applications in reaction to poor performance caused by heavy load. What's interesting is that it can recognize that it is, in fact, heavy load causing the performance degradation and not, say, a poorly performing upstream switch or other infrastructure device. When a degradation is due to functionality deep inside the application, AppDynamics can drill down to the code level and identify which line(s) of code are responsible for the problem. So it's profiling not only the network and the environment, but the very code within the application. That's serious visibility that generally isn't available even in traditional APM solutions.
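The distinction matters: scaling out cures load, not a sick switch. A minimal sketch of such a load-aware policy check might look like the following. The thresholds, metric names, and the decision labels here are all hypothetical illustrations, not AppDynamics' actual API or policy language.

```python
# Hypothetical sketch of a load-aware scaling policy: scale out only when
# degraded response time coincides with elevated load, so infrastructure
# faults are not "fixed" by launching instances that cannot help.

def diagnose(response_time_ms, requests_per_sec,
             rt_threshold_ms=500, load_threshold_rps=1000):
    """Classify a performance degradation by its likely cause."""
    if response_time_ms <= rt_threshold_ms:
        return "healthy"
    if requests_per_sec >= load_threshold_rps:
        return "scale_out"      # degradation correlates with heavy load
    return "investigate"        # slow but not busy: look upstream

print(diagnose(250, 400))      # healthy
print(diagnose(900, 1500))     # scale_out: launch new instances
print(diagnose(900, 300))      # investigate: likely infrastructure
```

Only the "scale_out" branch would trigger automated instance launches; the "investigate" branch is exactly the case where launching more instances would waste money without improving response time.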

Also of note is the ability to interpret performance trends in the context of historical application performance. The solution is able to recognize that while a spike in load on Tuesday afternoon is normal based on historical trends, it is not typical on Monday morning and thus something is happening that needs to be addressed.
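That kind of time-of-week baselining can be sketched simply: compare the observed load against the historical norm for that same weekday and hour. The data structure and thresholds below are illustrative assumptions, not the product's internal model.

```python
# Hypothetical sketch of time-of-week baselining: a load level is anomalous
# only relative to the historical norm for that weekday and hour, so a
# Tuesday-afternoon spike can be normal while the same level on a Monday
# morning is not.
from statistics import mean, stdev

def is_anomalous(history, weekday, hour, observed, sigmas=3.0):
    """history maps (weekday, hour) -> list of past load samples (rps)."""
    samples = history[(weekday, hour)]
    mu, sd = mean(samples), stdev(samples)
    return abs(observed - mu) > sigmas * sd

history = {
    ("Tue", 14): [900, 950, 1000, 980, 940],  # spikes are normal here
    ("Mon", 9):  [100, 120, 110, 105, 115],   # normally quiet
}
print(is_anomalous(history, "Tue", 14, 970))  # False: within the norm
print(is_anomalous(history, "Mon", 9, 970))   # True: far outside it
```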


You know me, I got excited about the potential of such a solution when we started talking about integration and collaboration. AppDynamics, like many solutions today, exposes a couple of integration points via APIs that allow the gathering of data for use in its orchestration component. That data can be extended to include user-specified data to broaden visibility as necessary for a specific application. It could also be used, and here's the cool part, to integrate data from Infrastructure 2.0 capable components to broaden its understanding of the root cause of performance degradations even further.
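The core of that integration idea can be sketched as merging metric views from different layers into one record that a root-cause or orchestration engine could reason over. The field names and sources below are purely illustrative; they are not AppDynamics' actual API or any real Infrastructure 2.0 schema.

```python
# Hypothetical sketch: merge application-level metrics (as an APM agent
# might report them) with network-level metrics (as an Infrastructure 2.0
# component might expose them) into one record, so root-cause analysis
# can see both layers at once.

def merge_views(app_metrics, infra_metrics):
    """Combine per-layer metric dicts, prefixing keys by their source."""
    combined = {f"app.{k}": v for k, v in app_metrics.items()}
    combined.update({f"infra.{k}": v for k, v in infra_metrics.items()})
    return combined

view = merge_views(
    {"response_time_ms": 820, "errors_per_min": 3},
    {"switch_queue_depth": 0, "link_utilization_pct": 12},
)
# High response time with an idle network points at the application tier
# rather than an upstream infrastructure device.
print(view["app.response_time_ms"], view["infra.link_utilization_pct"])
```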

That's not unlike the way virtualization management solutions like VMware's vSphere and Microsoft's System Center integrate with Infrastructure 2.0 capable components so they can use data that would otherwise be unavailable in their provisioning decisions. Imagine integrating detailed network and application network data not only with virtualization container data but also with more detailed application-layer data, the kind that provides the information necessary to determine not only whether new resources are needed to maintain performance or availability, but what kind of resources.


APM, and indeed most solutions requiring visibility into the application, were moving rapidly toward an agentless model just a few years ago. It was a badge of honor, in fact, that a solution was able to provide visibility into the application layer without deploying an agent. That's primarily because agent-based models require additional management, testing, patching, and upgrades that are just as problematic as any other integrated solution.

But the advent of cloud and the "black box" infrastructure mentality is driving APM and other visibility and management solutions back toward an agentful model, because it's simply too difficult, in some cases impossible, to gather the required data any other way. For the same reasons that emerging architectural models for cloud computing tightly couple application-aware VNAs (Virtual Network Appliances) with workloads for visibility and control, it will be necessary to tightly couple APM solutions with the application through agent-based technology.

Infrastructure 2.0 requires a trio of components working in concert to create a dynamic infrastructure that can scale up and down on-demand and apply the right policies at the right time to ensure applications are fast and secure. One of that trio is the application, and when it is not feasible for developers to integrate the application directly with the rest of the dynamic infrastructure fabric it makes sense to leverage an APM solution – an agent-based APM solution – to accomplish that task instead. Whether it’s direct or via an APM solution like AppDynamics, the delivery of an application is ultimately the goal of any data center architecture and thus the application is a key component of the Infrastructure 2.0 trifecta.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
