General Purpose GPUs for Network Analytics

So, a big deal was made about a paper from some university researchers somewhere (source here) noting the efficacy of using GPGPUs for network analysis (ostensibly as a correlation sieve for all the streams of information being presented). While this is a GREAT paper, it neglects a few critical points that someone who's a bit more familiar with the platforms might pick up:

  1. The rise of APUs
  2. Power consumption and thermals
  3. Form factor

Let’s tackle these individually (and yes, in a cursory manner):

APUs

The rise of integrated CPU and GPU resources is nothing new.  For the longest time, sub-par graphics capabilities coupled to a moderately powered CPU have been the staple of SMB and “home” computing, where ease of integration and application were the norm.  With the advance of more powerful GPUs, however, the ability to run onboard (albeit restricted) graphics architectures like GCN or Iris in AMD and Intel processors (respectively) has given rise to the idea that the processing complex can do more with less real estate (which will be point #3).  The integration of eDRAM on-die (in the case of Iris Pro, for example) brings platform bandwidth (something required to keep a GPU’s typically starved pipelines fed efficiently) directly to the point where it’s most needed, reducing the latency and hops of traversing the PCIe bus.  What’s been missing, however, is full hardware integration where common resources (e.g. system DRAM) can be accessed in a completely fluid model; for reference, see AMD’s discussion of hQ (heterogeneous queuing) and equal CPU/GPU dispatch.
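To make the zero-copy point a bit more concrete, here's a minimal OpenCL sketch (mine, not from the paper or the post). It assumes an APU-class device where a CL_MEM_ALLOC_HOST_PTR buffer lands in shared system DRAM, so mapping the buffer gives the GPU cores a direct view of host memory instead of a staging copy across PCIe; the flow-record layout and the "watched address" filter are made up purely for illustration, and error checking is omitted for brevity.

```c
/* Sketch: zero-copy "correlation sieve" over flow records on an APU-class
 * OpenCL device. Hypothetical record layout and filter; not a reference
 * implementation. */
#define CL_TARGET_OPENCL_VERSION 120
#include <stdio.h>
#include <string.h>
#include <CL/cl.h>

#define N_RECORDS 4096

/* Hypothetical flow record: src/dst IPv4 addresses, byte count, flag. */
typedef struct { cl_uint src, dst, bytes, flagged; } flow_t;

static const char *kernel_src =
    "typedef struct { uint src, dst, bytes, flagged; } flow_t;          \n"
    "__kernel void sieve(__global flow_t *f, uint watch_ip) {           \n"
    "    size_t i = get_global_id(0);                                   \n"
    "    /* flag any flow touching the watched address */               \n"
    "    f[i].flagged = (f[i].src == watch_ip || f[i].dst == watch_ip); \n"
    "}                                                                  \n";

int main(void) {
    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx     = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* CL_MEM_ALLOC_HOST_PTR asks the runtime for host-visible memory;
       on an APU this is the zero-copy path (no PCIe traversal). */
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_ALLOC_HOST_PTR,
                                sizeof(flow_t) * N_RECORDS, NULL, NULL);

    /* Map the buffer and fill it in place, instead of clEnqueueWriteBuffer. */
    flow_t *flows = clEnqueueMapBuffer(q, buf, CL_TRUE, CL_MAP_WRITE, 0,
                                       sizeof(flow_t) * N_RECORDS,
                                       0, NULL, NULL, NULL);
    memset(flows, 0, sizeof(flow_t) * N_RECORDS);
    flows[42].src = 0x0A000001;               /* 10.0.0.1, for example */
    clEnqueueUnmapMemObject(q, buf, flows, 0, NULL, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "sieve", NULL);

    cl_uint watch_ip = 0x0A000001;
    clSetKernelArg(k, 0, sizeof(cl_mem), &buf);
    clSetKernelArg(k, 1, sizeof(cl_uint), &watch_ip);

    size_t gsize = N_RECORDS;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &gsize, NULL, 0, NULL, NULL);

    /* Map again to read results; on an APU this is again a no-copy view. */
    flows = clEnqueueMapBuffer(q, buf, CL_TRUE, CL_MAP_READ, 0,
                               sizeof(flow_t) * N_RECORDS, 0, NULL, NULL, NULL);
    printf("record 42 flagged: %u\n", flows[42].flagged);
    clEnqueueUnmapMemObject(q, buf, flows, 0, NULL, NULL);
    clFinish(q);
    return 0;
}
```

On a discrete card the very same calls typically imply hidden staging copies over PCIe; on an APU (and even more so with hQ-style shared queuing) the map/unmap pair is essentially free, which is the whole appeal for streaming workloads like this.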

Power Consumption and Thermals

Unlike discrete GPUs, with their sometimes massive power requirements (an ~80-210 W differential) and dedicated power drops from the PSU (increasing the complexity of system design; think Open Compute), APUs provide a consistent thermal and power envelope that can be planned into the system design without excess cost.  The downside of this design is the nature of the GPU core: usually mid-range and a generation behind (Kaveri being an exception), and not as brute-force powerful as, say, current-generation discrete parts (NVIDIA Titan, AMD R9, etc.).  Then again, is all that capability needed?
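As a back-of-the-envelope illustration of that planning point (my numbers, not the post's): take the ~80-210 W per-card differential above and see what it does to a rack budget. The 15 kW rack budget, 40-node density, and 250 W CPU-only node draw below are assumptions for illustration only.

```c
/* Sketch: how a discrete-GPU power delta eats into an assumed rack budget. */
#include <stdio.h>

int main(void) {
    const double rack_budget_w = 15000.0;  /* assumed per-rack power budget */
    const int    nodes_per_rack = 40;      /* assumed 1RU nodes per rack    */
    const double base_node_w   = 250.0;    /* assumed CPU-only node draw    */

    /* Check both ends of the ~80-210 W differential quoted above. */
    for (double gpu_delta_w = 80.0; gpu_delta_w <= 210.0; gpu_delta_w += 130.0) {
        double per_node = base_node_w + gpu_delta_w;
        int fit = (int)(rack_budget_w / per_node);
        if (fit > nodes_per_rack) fit = nodes_per_rack;
        printf("+%3.0f W per node -> %3.0f W/node, %2d of %d nodes fit the budget\n",
               gpu_delta_w, per_node, fit, nodes_per_rack);
    }
    return 0;
}
```

At the low end the rack still fills; at the high end roughly a fifth of the slots go dark (or the cooling bill goes up), which is exactly the kind of variability an APU's fixed envelope avoids.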

Form Factor

This is somewhat answered in the previous section, but given that most servers want a limited footprint in terms of RU (2RU being common for SIEM appliances), and that disk and network I/O tend to consume most of the slot resources, it would be a challenge to fit a two-slot GPU (and the full-length card to match) into a server while maintaining power, thermal, and I/O consistency.  By utilizing APUs, “thin” 1RU twin servers (a la Supermicro, or even Dell’s excellent VRTX) could be harnessed without excessive rebuild times or additional power and cooling considerations.

Conclusion

Just some thoughts down on paper but I’m liking the direction things are going, specific to networking. Thoughts welcome.

More Stories By Dave Graham

Dave Graham is a Technical Consultant with EMC Corporation, where he focuses on designing/architecting private cloud solutions for commercial customers.
