By Jeremy Geelan
October 25, 2004 12:00 AM EDT
IBM Response to the study entitled
"Comparing Microsoft .NET and IBM WebSphere/J2EE"
by The Middleware Company
IBM Competitive Technology Lab
Another Flawed Study
The latest Middleware Company study is flawed and does not accurately reflect the capability of WebSphere J2EE vs. Microsoft .NET. Like two previous discredited Middleware Company studies, this study was funded by Microsoft. While expert Microsoft programmers were allowed to contribute to the .NET side, neither IBM nor other WebSphere J2EE product experts were invited to contribute to the testing.
The Middleware Company publicly retracted the results of its previous J2EE vs. .NET study because of serious errors in its testing methodologies and its failure to invite J2EE vendors to participate. This study has similar problems. The products and toolsets compared in the testing were mismatched, and relatively crude WebSphere J2EE programming techniques were used rather than the most optimal tools and programming methodologies. Statements made within the study show that the choice of tools and methodologies was deliberate.
The cost comparison between IBM and Microsoft is also inaccurate and does not represent a real-world environment. In addition, important cost considerations were missing on the Microsoft side.
The Middleware Company's WebSphere J2EE programmers failed to use development practices that would have yielded more favorable results for WebSphere J2EE, such as practices that enable software reuse across enterprise architectures. The study also did not attempt to measure the advantages of team development across a business with multiple team members involved. Rather, the study timed the work of three relatively inexperienced WebSphere developers. The results are misleading in each of the categories the tests aimed to measure: Development Productivity, Tuning Productivity, Performance, Manageability, and Reliability.
In its apology and retraction letter for its last J2EE vs. .NET test, The Middleware Company wrote: "We also admit to having made an error in process judgment by inviting Microsoft to participate in this benchmark, but not inviting the J2EE vendors to participate in this benchmark." 
Inviting an expert Microsoft partner to participate while not inviting IBM or an IBM partner to participate was once again a fatal flaw. Using outside experts for the Microsoft .NET programming team while using its own employees for the WebSphere J2EE programming team is an obvious conflict of interest in a study funded by Microsoft.
We believe that if J2EE development best practices had been used, the developer productivity would have exceeded that of Visual Studio.NET. The performance and manageability of the WebSphere J2EE application also would have exceeded the published numbers.
Microsoft's track record on these types of studies is not good, and users should generally approach them with caution. Besides two previous Middleware Company studies that were retracted,  Forrester Research issued a disclaimer of its Microsoft-sponsored study on the cost of ownership of Linux vs. Windows,  and a similar IDC Linux/Windows study was discredited in Business Week by one of its authors. 
A closer look at how the tests were conducted and the results framed reveals why The Middleware Company study is without merit.
Notes:
- In the Appendix is the complete text of The Middleware Company retraction letter published in November 2002.
- The Middleware Company, "A Message to the J2EE and .NET Communities," November 8, 2002.
- The initial Middleware Company Pet Shop study and the J2EE vs. .NET study cited in the retraction in the Appendix.
- "Forrester Alters Integrity Policy," CNET News.com, October 7, 2003. "Forrester published a new policy that forbids vendors that commission studies from publicizing their results. The new policy follows the release last month of a study that concluded that companies spent less when developing certain programs with the Windows operating system than with Linux. Giga polled just 12 companies for the study, which led some critics to question the report's statistical relevance."
- "Pecked by Penguins," Business Week, March 3, 2003. Says the report: "One of the study's authors accuses Microsoft of stacking the deck. IDC analyst Dan Kusnetzky says the company selected scenarios that would inevitably be more costly using Linux."
Wrong "WebSphere" Tool
One of the largest faults of the study is the focus on Rational Rapid Developer (RRD) as the primary J2EE development tool from IBM. The results for RRD are represented as "WebSphere" and RRD is characterized as the main WebSphere development tool. However, IBM has never marketed RRD as a full-blown WebSphere toolset with all the functionality of WebSphere Studio Application Developer (WSAD). WSAD is the tool intended specifically for building integrated enterprise J2EE applications.
Comparing VS.NET to RRD is not a fair comparison, nor is it the comparison IBM would have recommended. Why, then, did The Middleware Company characterize Rational Rapid Developer as the primary tool for the WebSphere J2EE environment? The choice is particularly odd given that they were aware WSAD was the "mainstream" tool for WebSphere development (and that they achieved better results with WSAD). Articles in the trade press reported that the WSAD results were "unofficial" because they were not as controlled as the RRD measurements. This explanation was provided by The Middleware Company and Microsoft.
The WSAD results in this study were comparable to VS.NET in developer time spent on the project. The approach to application development in RRD is very different from WSAD, so the theory postulated in the study that the WSAD results were boosted by the experience the team gained using RRD is not credible.
The bottom line is that The Middleware Company chose to emphasize and give more weight to a tool that would not be recommended by IBM as the primary tool for WebSphere J2EE development. By focusing on RRD, the study draws attention away from the more favorable results posted by WSAD, which is IBM's premier application development tool and the direct competitor to VS.NET. Representing RRD in this way is inaccurate and misleading.
The table below summarizes the development tool comparisons made in the study:
Category                  | .NET vs. RRD   | .NET vs. WSAD   | RRD vs. WSAD
Development Productivity  | .NET beats RRD | .NET = WSAD     | WSAD beats RRD
Tuning Productivity       | .NET beats RRD | .NET = WSAD     | - uncertain -
Performance               | .NET beats RRD | .NET = WSAD     | WSAD beats RRD
Manageability             | .NET beats RRD | .NET beats WSAD | RRD beats WSAD
RRD = Rational Rapid Developer
WSAD = WebSphere Studio Application Developer
A fair comparison requires a level playing field. Here are the specific problems in this study that invalidate the results.
Unequal Programming Teams
Among the most questionable claims made in the study was that the .NET and J2EE programming teams were equal in programming experience and knowledge of their respective platforms. Having programming teams of equal skill is critical to the credibility of the end results.
The report states that three Microsoft "senior developers" were sourced from a Microsoft Solution Provider (Vertigo Software). The J2EE developers were sourced from The Middleware Company. Why were the J2EE developers not sourced from an IBM business partner that would have had some knowledge about IBM products?
There is an obvious disparity between the skill levels of the .NET and WebSphere teams exhibited during the actual development work. The Microsoft team exhibited remarkable experience and depth of knowledge. One example is the Microsoft team's decision to code distributed transactions using "ServiceDomain classes" instead of the "COM+ catalog." ServiceDomain classes are very new, very complex, and not well understood or documented.
Another example of the .NET developers' high skill level was their use of the "Data Access Application Block" (DAAB) to enhance performance. This is not a standard part of .NET; one must explicitly download and install it. To decide to code the distributed transactions with ServiceDomain classes, to add DAAB for performance enhancement, and to have it all work the first time shows deep technical skill.
The WebSphere team, on the other hand, exhibited little or no experience with WebSphere Studio Application Developer. Their inexperience is obvious and is easily traced throughout the study.
The study reports (pg.37) that "the .NET team was undoubtedly more experienced with their tool than the J2EE team with theirs."
The J2EE team decided to install the Sun JVM rather than the IBM-supplied JVM 1.4.1. WebSphere Studio product requirements state that the IBM JVM should be used; otherwise, unpredictable results can occur. The unpredictable results that occurred later (pg. 59-60) were an unrecognized parameter during garbage collection. The time it took to troubleshoot and fix the problem counted against the J2EE team. The substitution of the Sun JVM can only be explained by a lack of experience with WebSphere Studio. If the intent was to test how WebSphere Studio performed, why introduce a non-IBM JVM?
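A quick sanity check of which JVM is actually in use can catch this kind of substitution before it costs troubleshooting time. The sketch below simply reads the standard JVM system properties; in the JVMs of that era, java.vendor distinguished the IBM JVM from the Sun one.

```java
// Minimal sketch: report the identity of the running JVM via the
// standard system properties. On the IBM JVM that WebSphere Studio
// required, java.vendor reported IBM; on the substituted Sun JVM it
// reported Sun Microsystems Inc.
public class JvmCheck {
    public static void main(String[] args) {
        System.out.println("java.vendor  = " + System.getProperty("java.vendor"));
        System.out.println("java.version = " + System.getProperty("java.version"));
        System.out.println("java.vm.name = " + System.getProperty("java.vm.name"));
    }
}
```

Running this from the application server's own java launcher (rather than whatever JVM happens to be on the PATH) shows exactly which runtime the tests would exercise.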
The J2EE team decided not to do any source code management (pg. 32), even though ClearCase LT and CVS were available. "Instead they divided their work carefully and copied changed source files between their machines." The .NET team used Visual SourceSafe for source control. Does the J2EE decision indicate experienced developers? Problems resulted (pg. 45) for the J2EE developers that increased their development time. It appears again that lack of experience played a role in this poor decision.
The J2EE team decided to use the Sun J2ME wireless toolkit in Sun One Studio to create an application for a mobile device that was part of the specification. The code generated by the Sun wireless toolkit did not work with the RRD-generated application. Again, not using the WebSphere Studio tools to interface with the mobile device displays a lack of experience.
It took the J2EE team extra time to discover a simple rights error (missing SELECT rights) when they went to access the Oracle database. This counted as developer productivity time and again showed a lack of experience.
- The J2EE developer team used POJOs (Plain Old Java Objects) instead of what best practice would have indicated here: EJBs. The report notes later that the team created a "framework" to handle the JDBC access. Had they used EJBs, the container would have handled the pooled requests, and it can be argued that performance would have been better. In the POJO scenario, a new connection had to be created for every call to the database, which is very expensive. In addition, they spent extra time developing a "crude" framework to handle the requests, which counted against them.
- The J2EE team chose not to use (or did not seem aware of) the more productive tools available such as Java Server Faces and Service Data Objects.
- The J2EE team specifically chose not to use a Model-View-Controller approach such as Struts.
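The connection cost behind the POJO point above can be illustrated with a toy sketch (hypothetical code, not taken from the study): opening a fresh connection on every call pays the expensive setup cost each time, while a container-style pool pays it roughly once per pooled connection.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy illustration of per-call connection creation vs. pooling.
// Conn stands in for an expensive java.sql.Connection; the static
// counter tracks how many "connections" were actually opened.
public class PoolSketch {
    static int connectionsOpened = 0;

    static class Conn { Conn() { connectionsOpened++; } }

    static final Deque<Conn> pool = new ArrayDeque<>();

    // POJO-framework style: a new connection for every database call
    static Conn perCall() { return new Conn(); }

    // Container-pooling style: reuse a free connection when available
    static Conn fromPool() {
        Conn c = pool.poll();
        return (c != null) ? c : new Conn();
    }

    static void release(Conn c) { pool.push(c); }

    public static void main(String[] args) {
        for (int i = 0; i < 100; i++) perCall();          // opens 100
        int afterPerCall = connectionsOpened;
        for (int i = 0; i < 100; i++) {                   // opens 1, reuses 99
            Conn c = fromPool();
            release(c);
        }
        System.out.println("per-call opens: " + afterPerCall
            + ", pooled opens: " + (connectionsOpened - afterPerCall));
    }
}
```

This is exactly the work an EJB container's managed datasource would have done for the J2EE team at no development cost.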
Despite the fact that the three J2EE assigned developers from The Middleware Company had little or no experience with WebSphere Studio, and did not use recommended best practices, WebSphere Studio was still robust enough to allow the J2EE team to equal the results of the expert Microsoft team using VS.NET in the areas of developer productivity, configuration, tuning and performance.
Software Platform Disparity
For a fair performance comparison, the software platforms being tested should be equal. However, in addition to the lack of J2EE developer experience, different databases and operating systems were used in this test. Consider the following:
Although an attempt was made to make the hardware equivalent, that did not hold true for the base development systems. The Microsoft development team's system consisted of Windows XP and a pre-loaded SQL Server database. The J2EE team's system had Windows XP and a pre-loaded Oracle database. These two different databases were used in the production environment as well. Why were two different databases used by the two teams? At the very least, the databases should have been tuned, optimized, tested, and certified as performing equally. We have no information or data to indicate how they compared.
This flaw by itself should be enough to call the entire performance results into question. Before a single line of code was written, the entire study was suspect.
Linux was mandated to run the performance tests. A real performance test should compare one competitor to another with as much commonality as possible. The WebSphere J2EE tests should have been run on the same platform (Windows Server 2003) as the .NET tests.
The configurations in The Middleware Company study were restricted, which benefited Microsoft in performance vs. the J2EE application. The study did not allow for multiple-tier implementations - configurations that would mirror real-world usage. Customers like to separate physical tiers of Web applications because it gives them more flexibility for allocation of resources on a tier-by-tier basis and the ability to separate tiers for security and/or geographical concerns.
There were a number of other factors in The Middleware Company study that impeded the J2EE application's performance:
- The Edge Server adds an unnecessary layer. Using the IBM HTTP Server where the Edge Server is, and deleting the two IBM HTTP Servers from the application server machine, would suffice. This would decrease the complexity and total path length of the application and free more processing power for the application servers.
- The teams used different patterns to design the applications. The Microsoft team used a stored-procedures pattern while the J2EE team did not. It is quite possible that better performance would have been achieved on the J2EE side on the same playing field.
- Heap size determines the amount of memory allocated to the Java Virtual Machine; one would want to dedicate as much memory as possible to it. The heap sizes were allocated on the assumption that the servers had 1 GB of memory each. This appears to be incorrect: the diagram on page 26 states the servers have 2 GB each, as does the CN2 Technology auditor report. A higher heap size should have been used, on the order of a minimum of 1,256 MB and a maximum of 1,536 MB.
- The standard method of storing data in HTTP sessions was not used. The standard method of storing session data should have been used for both platforms.
- The use of two datasources in the J2EE case affects performance negatively. Of the two datasources allocated, one was XA-compliant for two-phase commit and the other was not. The claim was that splitting requests between the two datasources would improve performance. In theory that might be true; however, it effectively precludes sharing data connections between the two datasources, and all the database requests going through the XA datasource would lock the tables for other transactions. A single XA datasource could have satisfied both the two-phase commit requirement and connection pooling, and performance might have improved because of the ability to cache the connection pools.
- In general, the performance runs were done based on achieving a maximum response time of one second. The servers were not driven to maximum capacity. It is never stated what the limiting resource was on the system. A more realistic view of performance and stability is a stress run, where there is no "think time" or wait time between interactions, and the only constrained resource is the application server under test.
- On the integrated scenario, the testers conveniently chose to implement 75% of the application where Microsoft is slightly better than IBM, and only 25% of the application where IBM beats Microsoft by a 2-to-1 ratio when run together. Doing the math on the separate scenarios, IBM should have been 650 vs. 550 for Microsoft, yet in the integrated scenario IBM is 482 vs. 855 for Microsoft. There are no details on the throughput for each application in the integrated scenario, and there is no attempt to ascertain the limiting resource(s) during the runs, yet we are expected to believe that the applications themselves ran slower because they were run concurrently on separate servers.
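The weighting arithmetic in the last point can be checked with a short sketch. The standalone throughputs below (pages/sec) are assumptions, since the study does not publish them; they are chosen only to reproduce the 75/25 shape described above and to show how such a mix dilutes a 2-to-1 advantage on the smaller portion.

```java
// Hypothetical arithmetic check of the integrated-scenario weighting.
// The standalone throughput figures are assumed, not from the study.
public class MixSketch {
    // Blended throughput: shareA of the workload runs at rateA,
    // the remainder at rateB.
    static double blended(double rateA, double rateB, double shareA) {
        return shareA * rateA + (1 - shareA) * rateB;
    }

    public static void main(String[] args) {
        // 75% of the app: .NET slightly ahead (550 vs. 500, assumed).
        // 25% of the app: IBM ahead 2 to 1 (1100 vs. 550, assumed).
        double ibm  = blended(500, 1100, 0.75);  // 650.0
        double msft = blended(550,  550, 0.75);  // 550.0
        System.out.println("expected blended: IBM " + ibm + " vs. .NET " + msft);
    }
}
```

With these assumed inputs the expected blended result is IBM 650 vs. .NET 550, consistent with the separate-scenario figures cited above and far from the 482 vs. 855 the study reports.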
The study's cost assessment is inaccurate, adding costs to the IBM side while excluding costs that should have been attributed to Microsoft. One such extraneous cost was the use of high-end 4-way servers, which was unnecessary and raised the WebSphere configuration cost significantly. A typical configuration like the one in the study could have been deployed on three 2-way servers instead of the 4-way servers that were used.
The Middleware Company's J2EE team initially used a three-node configuration (pg. 28) but added a fourth server to run WebSphere MQ Series. The developers' lack of experience added unnecessary cost to the configuration, because a JMS server ships natively with WebSphere Application Server at no extra cost. This unnecessary use of hardware and software increased the cost of the WebSphere configuration by more than $200,000.
While IBM's costs were inflated, we believe the Microsoft configuration costs were understated. The Microsoft configuration was missing an extra year of Software Assurance for the Windows Server 2003 Enterprise Server license. This was due to The Middleware Company's misinterpretation of the terms of the Open Value Agreement, which is three years, not two. The Microsoft configuration should be increased by $595 per server to cover the cost of an extra year of software assurance.
In a typical configuration, users are required to authenticate before gaining access to Web applications. Each client device would require a Microsoft client access license (CAL). Client access licenses are fees Microsoft charges customers for each user or device that accesses a Microsoft Windows server and Microsoft's middleware server. These fees are costly and are in addition to the base server pricing. In Microsoft's Software Assurance licensing scheme, users pay an additional 25% of the CAL cost per year for three years. The WebSphere configuration doesn't require users to pay an extra cost for client access and also allows unlimited customer and external customer access to the middleware servers. As shown in the charts below, the inclusion of CAL cost increases the overall price of the Microsoft configuration.
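The CAL arithmetic described above compounds quickly. The sketch below uses a hypothetical per-device CAL price (the study's actual prices are not reproduced here); the 25%-per-year Software Assurance charge over the three-year term is taken from the discussion above.

```java
// Sketch of the client access license (CAL) cost arithmetic.
// The $40 per-device CAL price is a placeholder assumption, not a
// quoted Microsoft price.
public class CalCost {
    static double threeYearCost(double calPrice, int devices) {
        double softwareAssurance = 3 * 0.25 * calPrice; // 25%/yr for 3 yrs
        return (calPrice + softwareAssurance) * devices;
    }

    public static void main(String[] args) {
        // 500 authenticated devices at an assumed $40 CAL:
        // (40 + 30) * 500 = $35,000 on top of base server pricing.
        System.out.println("3-year CAL cost: $" + threeYearCost(40.0, 500));
    }
}
```

Even at modest assumed prices, per-device fees of this kind scale linearly with the user population, which is why their omission materially understates the Microsoft configuration cost.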
Another omission was the MSDN enterprise subscription cost that reflected only one year rather than the three-year terms of the Open Value Agreement. This would add an additional $5,000 to the Microsoft pricing.
The study's assessment of pricing is skewed in other ways that favor Microsoft. For example, the Microsoft configuration is based on a nonsecure environment that would be considered a security risk in a real-world deployment. The WebSphere architecture provides native reliability and security as a standard and does not require users to pay an extra cost for secure client access.
The Microsoft configuration does not require users to authenticate against Active Directory, allowing anonymous users access to the study's Web applications. Not using authentication or security simplified the development of the .NET application and left the environment open to the types of security risks that dominate the news headlines.
The Middleware Company's cost model understated the full Microsoft costs required for the scenario. A more thorough study of the total costs of ownership shows that Linux and WAS Express have costs slightly lower than the Microsoft solution. This is in strong contrast to their claim that WebSphere would cost ten times more than the Microsoft solution. Below is our assessment of IBM and Microsoft pricing based on a typical customer deployment.