Breaking News: New Internal IBM Report Says "Another Flawed Study"

IBM Response to Study "Comparing Microsoft .NET and IBM WebSphere/J2EE"

IBM Response to the study entitled

"Comparing Microsoft .NET and IBM WebSphere/J2EE"

by The Middleware Company

 

October 2004

IBM Competitive Technology Lab 

 

Another Flawed Study

 

The latest Middleware Company study is flawed and does not accurately reflect the capability of WebSphere J2EE vs. Microsoft .NET. Like two previous discredited Middleware Company studies, this study was funded by Microsoft. While expert Microsoft programmers were allowed to contribute to the .NET side, neither IBM nor other WebSphere J2EE product experts were invited to contribute to the testing.

 

The Middleware Company publicly retracted the results of its previous J2EE vs. .NET study because of serious errors in its testing methodologies and its failure to invite J2EE vendors to participate. [1] This study has similar problems. The products and toolsets compared in the testing were mismatched, and relatively crude WebSphere J2EE programming techniques were used rather than optimal tools and programming methodologies. Statements made within the study show that the choice of tools and methodologies was purposeful.

 

The cost comparison between IBM and Microsoft is also inaccurate and does not represent a real-world environment. In addition, important cost considerations were missing on the Microsoft side.

 

The Middleware Company's WebSphere J2EE programmers failed to use development practices that would have yielded more favorable results for WebSphere J2EE, for example, practices that would enable software reuse across enterprise architectures. The study also did not attempt to measure the advantages of team development across a business with multiple team members involved. Rather, the study timed the work of three relatively inexperienced WebSphere developers. The results are misleading in each of the categories the tests aimed to measure: Development Productivity, Tuning Productivity, Performance, Manageability, and Reliability.

 

In its apology and retraction letter for its last J2EE vs. .NET test, The Middleware Company wrote: "We also admit to having made an error in process judgment by inviting Microsoft to participate in this benchmark, but not inviting the J2EE vendors to participate in this benchmark." [2]

 

Inviting an expert Microsoft partner to participate while not inviting IBM or an IBM partner to participate was once again a fatal flaw. Using outside experts for the Microsoft .NET programming team while using its own employees for the WebSphere J2EE programming team is an obvious conflict of interest in a study funded by Microsoft.

 

We believe that if J2EE development best practices had been used, the developer productivity would have exceeded that of Visual Studio.NET. The performance and manageability of the WebSphere J2EE application also would have exceeded the published numbers.

 

Microsoft's track record on these types of studies is not good, and users should generally approach them with caution. Besides two previous Middleware Company studies that were retracted, [3] Forrester Research issued a disclaimer of its Microsoft-sponsored study on the cost of ownership of Linux vs. Windows, [4] and a similar IDC Linux/Windows study was discredited in Business Week by one of its authors. [5]

 

A closer look at how the tests were conducted and the results framed reveals why The Middleware Company study is without merit. 


[1] In the Appendix is the complete text of The Middleware Company retraction letter published in November 2002.

[2] The Middleware Company, "A Message to the J2EE and .NET Communities," November 8, 2002.

 

[3] The initial Middleware Company Pet Shop study and the J2EE vs. .NET study cited in the retraction in the Appendix.

[4] "Forrester Alters Integrity Policy," CNET News.com, October 7, 2003. "Forrester published a new policy that forbids vendors that commission studies from publicizing their results. The new policy follows the release last month of a study that concluded that companies spent less when developing certain programs with the Windows operating system than with Linux. Giga polled just 12 companies for the study, which led some critics to question the report's statistical relevance."

[5] "Pecked by Penguins," Business Week, March 3, 2003. Says the report: "One of the study's authors accuses Microsoft of stacking the deck. IDC analyst Dan Kusnetzky says the company selected scenarios that would inevitably be more costly using Linux."


Wrong "WebSphere" Tool

 

One of the largest faults of the study is the focus on Rational Rapid Developer (RRD) as the primary J2EE development tool from IBM. The results for RRD are represented as "WebSphere" and RRD is characterized as the main WebSphere development tool. However, IBM has never marketed RRD as a full-blown WebSphere toolset with all the functionality of WebSphere Studio Application Developer (WSAD). WSAD is the tool intended specifically for building integrated enterprise J2EE applications.

 

Comparing VS.NET to RRD is not a fair comparison, nor is it the comparison IBM would have recommended. Why, then, did The Middleware Company characterize Rational Rapid Developer as the primary tool for the WebSphere J2EE environment? The choice is particularly odd given that they were aware WSAD was the "mainstream" tool for WebSphere development (and that they achieved better results with WSAD). Articles in the trade press reported that the WSAD results were "unofficial" because they were not as controlled as the RRD measurements; this explanation was provided by The Middleware Company and Microsoft.

 

The WSAD results in this study were comparable to VS.NET in developer time spent on the project. The approach to application development in RRD is very different from WSAD's, so the theory postulated in the study, that the WSAD results were boosted by the experience the team gained using RRD, is not credible.

 

The bottom line is that The Middleware Company chose to emphasize and give more weight to a tool that would not be recommended by IBM as the primary tool for WebSphere J2EE development. By focusing on RRD, the study draws attention away from the more favorable results posted by WSAD, which is IBM's premier application development tool and the direct competitor to VS.NET. Representing RRD in this way is inaccurate and misleading.

 

The table below summarizes the development tool comparisons made in the study:

________________________________________________________________

Category                     .NET vs. RRD       .NET vs. WSAD      WSAD vs. RRD

Development Productivity     .NET beats RRD     .NET = WSAD        WSAD beats RRD
Tuning Productivity          .NET beats RRD     .NET = WSAD        - uncertain -
Performance                  .NET beats RRD     .NET = WSAD        WSAD beats RRD
Manageability                .NET beats RRD     .NET beats WSAD    RRD beats WSAD

RRD = Rational Rapid Developer
WSAD = WebSphere Studio Application Developer
________________________________________________________________

 

A fair comparison requires a level playing field. Here are the specific problems in this study that invalidate the results.

 

Unequal Programming Teams

 

Among the most questionable claims made in the study is that the .NET and J2EE programming teams were equal in programming experience and knowledge of their respective platforms. Having programming teams of equal skill is critical to the credibility of the end results.

 

The report states that three Microsoft "senior developers" were sourced from a Microsoft Solution Provider (Vertigo Software). The J2EE developers were sourced from The Middleware Company. Why were the J2EE developers not sourced from an IBM business partner that would have had some knowledge about IBM products?

 

There is an obvious disparity between the skill levels the .NET and WebSphere teams exhibited during the actual development work. The Microsoft team exhibited impressive experience and depth of knowledge. One example is the Microsoft team's decision to code distributed transactions using ServiceDomain classes instead of the COM+ catalog. ServiceDomain classes are very new, very complex, and not yet well understood or documented.

 

Another example of the high skill level of the .NET developers was their use of the Data Access Application Block (DAAB) to enhance performance. The DAAB is not a standard part of .NET; it must be explicitly downloaded and installed. Deciding to code the distributed transactions with ServiceDomain classes, adding the DAAB for performance enhancement, and having both work the first time demonstrates deep technical skill.

 

The WebSphere team, on the other hand, exhibited little or no experience with WebSphere Studio Application Developer. Their inexperience is obvious and is easily traced throughout the study.

  • The study reports (pg.37) that "the .NET team was undoubtedly more experienced with their tool than the J2EE team with theirs."
  • The J2EE team decided to install the Sun JVM rather than the IBM-supplied JVM 1.4.1. WebSphere Studio product requirements state that the IBM JVM should be used; otherwise unpredictable results can occur. The unpredictable results that occurred later (pg. 59-60) were an unrecognized parameter during garbage collection, and the time it took to troubleshoot and fix the problem counted against the J2EE team. The substitution of the Sun JVM can only be explained by a lack of experience with WebSphere Studio. If the intent was to test how WebSphere Studio performed, why introduce a non-IBM JVM?
  • The J2EE team decided not to do any source code management (pg. 32), even though ClearCase LT and CVS were available. "Instead they divided their work carefully and copied changed source files between their machines." The .NET team used Visual SourceSafe for source control. Does the J2EE decision indicate experienced developers? Problems resulted (pg. 45) for the J2EE developers that increased their development time. It appears again that lack of experience played a role in this poor decision.
  • The J2EE team decided to use the Sun J2ME wireless toolkit in Sun One Studio to create an application for a mobile device that was part of the specification. The code generated from the Sun wireless toolkit did not work with the RRD-generated application. Again, not using the WebSphere Studio tools to interface with the mobile device displays a lack of experience.
  • It took the J2EE team extra time to discover that they had a simple rights (SELECT rights) user error when they went to access the Oracle database. This counted as developer productivity time and again showed a lack of experience.
  • The J2EE developer team used POJOs (Plain Old Java Objects) instead of what best practice might have indicated here: EJBs. It was noted later in the report that the team created a "framework" to handle the JDBC access. If they had used EJBs, the container would have handled the pooled requests, and it can be argued that performance would have been better. In the POJO scenario, a new connection had to be created for every call to the database, which is very expensive (see the sketch after this list). In addition, they spent extra time developing a "crude" framework to handle the requests, which counted against them.
  • The J2EE team chose not to use (or did not seem aware of) more productive tools such as JavaServer Faces and Service Data Objects.
  • The J2EE team specifically chose not to use a Model-View-Controller approach such as Struts.
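To make the pooling point concrete, here is a minimal sketch, with hypothetical names, of the container-managed approach described above: a pooled DataSource is looked up once through JNDI, and each request borrows a connection from the pool instead of paying the full connection-setup cost on every database call. The JNDI name, class, and query are illustrative, not code from the study.

// Minimal, hypothetical sketch (not code from the study): a pooled,
// container-managed DataSource is looked up once via JNDI, and each
// request borrows a connection from the pool rather than opening a
// new physical connection per call.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class OrderDao {
    private final DataSource ds;

    public OrderDao() throws NamingException {
        // "jdbc/AppDS" is an illustrative JNDI name for a pooled
        // DataSource configured in the application server.
        ds = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/AppDS");
    }

    public int countOrders() throws SQLException {
        // getConnection() borrows from the pool; close() returns the connection.
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM ORDERS");
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getInt(1);
        }
    }
}

An EJB session bean achieves the same effect implicitly, because the container manages the pooled connections on the bean's behalf; the point is that neither approach requires opening a new physical connection per database call.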

 

Despite the fact that the three J2EE developers assigned by The Middleware Company had little or no experience with WebSphere Studio and did not use recommended best practices, WebSphere Studio was still robust enough to allow the J2EE team to equal the results of the expert Microsoft team using VS.NET in the areas of developer productivity, configuration, tuning, and performance.

 

Software Platform Disparity

 

For a fair performance comparison, the software platforms being tested should be equal. However, in addition to the lack of J2EE developer experience, different databases and operating systems were used in this test. Consider the following:

  •  Although an attempt was made to make the hardware equivalent, that did not hold true for the base development systems. The Microsoft development team's system consisted of Windows XP and a pre-loaded SQL Server database. The J2EE team's system had Windows XP and a pre-loaded Oracle database. These two different databases were used in the production environment as well. Why were two different databases used by the two teams? At the very least, the databases should have been tuned, optimized, tested, and certified that they were performing equally. We have no information or data to indicate how they compared.                   

This flaw by itself should be enough to call the entire performance results into question. Before a single line of code was written, the entire study was suspect.

  • Linux was mandated to run the performance tests. A real performance test should compare one competitor to another with as much commonality as possible. The WebSphere J2EE tests should have been run on the same platform (Windows Server 2003) as the .NET tests.

 

Single-Tier Restrictions

 

The configurations in The Middleware Company study were restricted, which benefited Microsoft in performance vs. the J2EE application. The study did not allow for multiple-tier implementations - configurations that would mirror real-world usage. Customers like to separate physical tiers of Web applications because it gives them more flexibility for allocation of resources on a tier-by-tier basis and the ability to separate tiers for security and/or geographical concerns.

 

There were a number of other factors in The Middleware Company study that impeded the J2EE application's performance:

 

  • The Edge Server adds another layer that is not required. Using the IBM HTTP Server where the Edge Server is, and deleting the two IBM HTTP Servers from the application server machine, would suffice. This would decrease the complexity and total path length of the application and allow more processing power for the application servers.

  • The teams used different patterns to design the applications: the Microsoft team used a stored-procedure pattern while the J2EE team did not. It is quite possible that better performance would have been achieved on the J2EE side on the same playing field (a J2EE sketch of this pattern follows this list).

  • Heap size determines the amount of memory allocated to the Java Virtual Machine, and one would want to dedicate as much memory as possible to the heap. The heap sizes were allocated on the assumption that the servers had 1 GB of memory each, which appears to be incorrect: the diagram on page 26 states the servers had 2 GB each, as does the CN2 Technology auditor report. A larger heap should have been used, for example a minimum heap size of 1,256 MB and a maximum of 1,536 MB (set with the JVM's standard -Xms and -Xmx options).

  • The standard method of storing data in HTTP sessions was not used; the same standard method of storing session data should have been used for both platforms.

  • The use of two datasources in the J2EE case affected performance negatively. Of the two datasources allocated, one was XA-compliant for two-phase commit; the other was not. The claim was that splitting requests between the two datasources would improve performance. In theory that might be true; in practice it precludes sharing of data connections between the two datasources, and all the database requests going through the XA datasource would lock the tables for other transactions. With a single XA datasource, they could have satisfied both the two-phase commit and the connection pooling, and performance might have improved because of the ability to cache the connection pools (a sketch of this alternative follows this list).

  • In general, the performance runs were done based on achieving a maximum response time of one second; the servers were not driven to maximum capacity, and it is never stated what the limiting resource on the system was. A more realistic view of performance and stability is a stress run, where there is no "think time" or wait time between interactions and the only constrained resource is the application server under test.

  • In the integrated scenario, the testers conveniently chose to implement 75% of the application from the workload where Microsoft is slightly better than IBM, and only 25% from the workload where IBM beats Microsoft by a 2-to-1 ratio. If one does the math on the separate scenarios, IBM should have scored 650 vs. 550 for Microsoft, yet in the integrated scenario IBM scored 482 vs. 855 for Microsoft. There are no details on the throughput for each application in the integrated scenario, and there is no attempt to ascertain the limiting resource(s) during the runs; yet we are expected to believe that the applications themselves ran slower simply because they were run concurrently on separate servers.
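For reference, the stored-procedure pattern the Microsoft team used is equally available to a J2EE application through JDBC's CallableStatement. The following is a minimal sketch with hypothetical names; the JNDI name and procedure are illustrative, not taken from the study.

// Hypothetical sketch only: calling a database stored procedure from J2EE
// code via JDBC's CallableStatement, the same data-access pattern the .NET
// team used. The JNDI name and procedure name are illustrative.
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class ItemRepository {
    public void updatePrice(long itemId, double newPrice)
            throws NamingException, SQLException {
        DataSource ds = (DataSource) new InitialContext()
                .lookup("java:comp/env/jdbc/AppDS");
        try (Connection con = ds.getConnection();
             // {call ...} is the standard JDBC escape syntax for stored procedures
             CallableStatement cs = con.prepareCall("{call UPDATE_ITEM_PRICE(?, ?)}")) {
            cs.setLong(1, itemId);
            cs.setDouble(2, newPrice);
            cs.execute();
        }
    }
}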
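And here is a sketch of the single-datasource alternative described in the datasource item above: one XA-capable DataSource enlisted in a container-managed transaction serves both the two-phase-commit path and ordinary requests, so all work shares a single connection pool. The JNDI names and SQL are again hypothetical.

// Hypothetical sketch of the single-XA-datasource alternative: one
// XA-capable DataSource handles both transactional and ordinary work,
// so every request draws from the same connection pool.
import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;

public class TransferService {
    public void recordTransfer(long fromId, long toId, double amount) throws Exception {
        InitialContext ctx = new InitialContext();
        // Illustrative JNDI names; in practice these are configured in the server.
        DataSource xaDs = (DataSource) ctx.lookup("java:comp/env/jdbc/XaAppDS");
        UserTransaction utx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");

        utx.begin();
        try (Connection con = xaDs.getConnection(); // pooled, XA-enlisted connection
             PreparedStatement debit = con.prepareStatement(
                 "UPDATE ACCOUNTS SET BALANCE = BALANCE - ? WHERE ID = ?");
             PreparedStatement credit = con.prepareStatement(
                 "UPDATE ACCOUNTS SET BALANCE = BALANCE + ? WHERE ID = ?")) {
            debit.setDouble(1, amount);
            debit.setLong(2, fromId);
            debit.executeUpdate();
            credit.setDouble(1, amount);
            credit.setLong(2, toId);
            credit.executeUpdate();
            utx.commit();
        } catch (Exception e) {
            utx.rollback(); // undo both updates if either fails
            throw e;
        }
    }
}

Because both updates run on a connection drawn from the same pool and enlisted in the same transaction, this configuration keeps connection caching while still satisfying the two-phase-commit requirement.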

Cost Manipulations

 

The study's cost assessment is inaccurate, adding costs to the IBM side while excluding costs that should have been attributed to Microsoft. One such extraneous cost was the use of high-end 4-way servers, which was unnecessary and raised the WebSphere configuration cost significantly. A typical configuration like the one in the study could have been deployed on three 2-way servers instead of the 4-way servers that were used.

 

The Middleware Company's J2EE team initially used a three-node configuration (pg. 28) but added a fourth server to run WebSphere MQ. The developers' lack of experience added unnecessary cost to the configuration, because a JMS server ships natively with WebSphere Application Server at no extra cost. This unnecessary use of hardware and software increased the cost of the WebSphere configuration by more than $200,000.

 

While IBM's costs were inflated, we believe the Microsoft configuration costs were understated. The Microsoft configuration was missing an extra year of Software Assurance for the Windows Server 2003 Enterprise Server license, due to The Middleware Company's misinterpretation of the terms of the Open Value Agreement, which runs three years, not two. The Microsoft configuration should be increased by $595 per server to cover the cost of the extra year of Software Assurance.

 

 

In a typical configuration, users are required to authenticate before gaining access to Web applications, and each client device would require a Microsoft client access license (CAL). Client access licenses are fees Microsoft charges customers for each user or device that accesses a Microsoft Windows server and Microsoft's middleware servers; these fees are costly and come in addition to the base server pricing. In Microsoft's Software Assurance licensing scheme, users pay an additional 25% of the CAL cost per year for three years. The WebSphere configuration doesn't require users to pay an extra cost for client access and allows unlimited internal and external customer access to the middleware servers. As shown in the charts below, the inclusion of CAL costs increases the overall price of the Microsoft configuration.
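To illustrate the arithmetic with purely hypothetical figures (these are not prices from the study): a $40 CAL with Software Assurance at 25% per year for three years adds 3 × $10 = $30, for a total of $70 per user or device over the term; at 1,000 authenticated devices, that is $70,000 on top of the base server licenses.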

 

Another omission was the MSDN Enterprise subscription cost, which reflected only one year rather than the three-year term of the Open Value Agreement. This would add an additional $5,000 to the Microsoft pricing.

 

The study's assessment of pricing is skewed in other ways that favor Microsoft.  For example, the Microsoft configuration is based on a nonsecure environment that would be considered a security risk in a real-world deployment. The WebSphere architecture provides native reliability and security as a standard and does not require users to pay an extra cost for secure client access.

 

The Microsoft configuration does not require users to authenticate against Active Directory, allowing anonymous users access to the study's Web applications. Not using authentication or security simplified the development of the .NET application and left the environment open to the types of security risks that dominate the news headlines.

 

The Middleware Company's cost model understated the full Microsoft costs required for the scenario.  A more thorough study of the total costs of ownership shows that Linux and WAS Express have costs slightly lower than the Microsoft solution.  This is in strong contrast to their claim that WebSphere would cost ten times more than the Microsoft solution. Below is our assessment of IBM and Microsoft pricing based on a typical customer deployment.

 

...continued on page 2




Most Recent Comments
Tabby Rasah 11/03/04 12:04:37 AM EST

Is the Middleware Company really independent? I think they're a subsidiary of Veritas--not that they are a platform vendor, but you have to figure there's some cross-interests there.

Movie Lover 10/31/04 12:20:23 PM EST

What is TMC? The Movie Channel?

Al 10/28/04 11:43:58 PM EDT

This so-called "leaked" report is as questionable as the .NET/J2EE comparison, including the claim that the inexperienced J2EE developers using WebSphere were still more productive than the experienced .NET developers. C'mon, SysCon, stop wasting our time and perpetuating this debate by publishing this drivel. You might as well be publishing articles to persuade people to change their political leanings.

Edu 10/28/04 04:36:52 PM EDT

Chris, it doesn't matter what 'stuff' you like more. What matters is which technology is more suitable for which task. I would like to see someone who has made Windows drivers with Java.

Mario, there is no question that Microsoft is trying to strengthen .NET's position in the 'MS-native' market. MS is trying to push .NET-based solutions into the J2EE market; this is why they participated in this study. This is marketing, the MS way.

Consultant 10/28/04 11:27:15 AM EDT

I find it hard to believe that a J2EE developer worth their salt would design their own framework, not use Struts, not use connection pooling, and not at least use session EJBs in the design for this application. There's no benchmarking of the databases, and you're telling me that the results are not based on stress tests? What industry do they work in? I've consulted and assisted with implementations at many Fortune 500 companies, and I can tell you that the tests were nothing like what happens in the real world.

Mario 10/27/04 03:25:31 PM EDT

It is amazing how Microsoft is so scared of J2EE that they have to cheapen themselves by taking part in the study. If .Net is as great as they profess it to be, I am sure that more companies would be using it.

It is just like politics - say anything to ruin your competitor and the common un-educated folks will believe it.

With the money that Microsoft has to throw around it is amazing that everything we use is not running some kind of Microsoft OS on it.

Software Architect 10/27/04 12:36:01 PM EDT

You cannot take any of these comparisons seriously. The fact that the J2EE project was developed by non-experts is a plus in my eyes, since most projects do not have so-called "experts" on them. I never take development-time comparisons seriously either, since that is always dependent on the team and the project. So which tools were used is almost irrelevant to me. In fact, any platform dependent on a specific tool for development is a poorly designed platform. More important is how the platform performs, so the fact that the IBM JVM was not used is probably the most valid complaint. The fact that the DAAB was used for .NET and has to be downloaded is a petty complaint; IBM has tons of tools downloadable from their site to aid development as well. I'd like to see a test where two non-expert teams can pick whatever tools they like, use any generally available software from the vendors, and are given a specific amount of time to attack the same problem.

WebSphere Lover 10/27/04 09:14:22 AM EDT

Sure Microsoft should sponsor the report like they did and the clowns at TMC should announce the results as an independent study. Way to go Chris

Chris 10/27/04 09:07:41 AM EDT

Sounds like the old "it's not the tools, it's the programmer who uses them" issue. True enough in most circumstances, I suppose. Then again, I have no doubt that a newbie coder could be productive with .Net in a much shorter time. I love the Java stuff, tools and mindset, but the .Net stuff is by far easier to develop with.

Mike Driver 10/27/04 07:38:55 AM EDT

The damage is done now, even if the report is discredited.
IBM could respond by sponsoring its own impartial study using experts from both sides of the argument. But that would probably end up being biased in favour of WebSphere/J2EE.
It would take a truly impartial group to create a truly impartial study... but where is the incentive?

Ben 10/27/04 07:16:31 AM EDT

Cameron, this is in response to a *new* TMC study, not the original one in 2002.

malcolm davis 10/27/04 12:28:35 AM EDT

I read the Middleware report, and the details. After investing a great deal of time in both J2EE and .NET, I knew the report was seriously flawed.

The TMC report also led you to believe that they had invited IBM developers. The Middleware Company has demonstrated that it is nothing more than a political hack. Unfortunately, damage is done when reports like TMC's are published at legitimate sites.

What should happen now? Legitimate sites should ban printing anything from TMC, even if TMC has genuine news. The only way to kill a political hack is to cut off its output venues.

Cameron 10/26/04 07:52:21 PM EDT

odd that it takes IBM two years to get a response out ..

Matthew Quinlan 10/26/04 06:26:03 PM EDT

Interesting. It is easy to throw stones at any benchmark or comparison study. However, it is a much more difficult task to actually design and conduct one. While the funding by Microsoft is clearly a conflict of interest, I would expect that even a truly independent comparison would generate many of the same inconsistencies. The fact is that these kinds of comparisons must take into account so many variables that have their own tradeoffs (and no clear corresponding variable on the competitor's platform) that the margin of error is larger than the absolute results.

This type of analysis is therefore relegated to for-hire companies (e.g. TMC) who provide fodder for customer executives who wish to defend their existing or upcoming technology decisions.

Alexander 10/26/04 06:16:11 PM EDT

The whole buzz in this article mainly shows one thing: it doesn't matter what tool and environment are used, the question is who is using them.
Use a bad programmer and you get a bad application.
However, one more thing is also evident: the dot-com boom dropped a lot of half-baked programmers onto the market, and most of them know Java as their only tool.

Scott Simmons 10/26/04 11:13:54 AM EDT

I find it a very large concern when an "independent" group (The Middleware Company) professes vendor neutrality and then publishes a report that would not even qualify as an undergraduate Computer Science homework assignment; it is clear that they are not acting in the best interest of the market when they publish these types of reports. It could be said that they are only harming themselves (by limiting their credibility), but I take exception, as lead decision makers depend on these types of reports to make critical business decisions. Makes you wonder how the report was funded... and brings to mind the ongoing Linux versus Windows ramblings that Microsoft is pushing. You would think that Microsoft would be spending money on fixing the huge number of security glitches (that still remain after SP2) rather than engaging in what looks like a well-funded attack campaign. It actually reminds me of the political election process we are witnessing across the US...

Heath 10/26/04 05:11:09 AM EDT

Yes, let's make sure all comparisons are actually equal. Vendor interests must be taken into account in a way that enables true best practise and (as close as possible to) real-world deployment.

By the same token, vendor interests that unnecessarily bias against others must be avoided or, if that is not possible, clearly identified and 'costed' into the comparisons.

I wonder how TMC would feel about an organisation committing itself to a big project based on their 'test' results, with open slather to sue them for each discovery that was contrary to their findings? That is the real test.

(I'm not troubled by the '3 fonts', but your link to Table 2 for the Microsoft costings is wrong: it points to the IBM Table. It is important to see the MS details as well.)

sa 10/26/04 02:36:03 AM EDT

Completely ignoring the content of the article, why do you have to publish this with so many fonts?!?! There has to be at least 3 sizes and two faces in use throughout this article. I expected better....
