By Brian McCallion
January 2, 2013 08:45 AM EST
Christmas Eve and the AWS/Netflix outage, to me, aren't so much about whether the Cloud is viable or scary or dangerous. Rather, the event resonated with users across the United States because the Cloud delivers so much utility to each of us. Regardless of who was at fault, Netflix or Amazon Web Services, the event made it clear that there's no going back: the Cloud has quickly become a part of our culture and our everyday lives. This is significant because while the Internet itself is a technology consumers have grown to love, the Cloud is a way of delivering service that makes something like Netflix streaming possible, and at a measly $8 a month. The Netflix business model of delivering outsize utility at a low price point makes the business of streaming video all the more difficult. HBO, Cinemax, and the networks are, for me, unusable. I sense something beyond just making money remains in play at Netflix. Somewhere in that organization seems to beat a heart that quickens for humanity.
While much is made of the sparsity of the Netflix streaming catalog, keep in mind that streaming is the disruptor's second act. The Wicked Witch, aka Blockbuster, is dead, in part, I think, because each of us desired a little bit of payback for all the times we returned videos late and were charged outrageous fees. Rather than punish Netflix Streaming for innovating, and for challenging the iron grip of the legacy media houses, I praise Netflix and personally admire the generous open source contributions Netflix has made to the Cloud community.
I once tweeted that the value of the code and tools Netflix has shared on GitHub may well be larger than the value of the streaming business as a whole. That may be true for about five minutes. We're at a tipping point in streaming video media similar to that of the music business just before the iPod. Yet each evolution carries forward a little bit of spin from the last disruption. In this iteration all the players know the iPod playbook, and so they are trying very, very hard to fight the gravity of disruption. Having lost Blockbuster and effective control over the distribution of their content, the old guard hold onto their catalog, stubbornly hoping Netflix will disappear. But we all know how this movie ends. I see a much larger Netflix catalog of content on the horizon and substantial value for consumers. Once the old guard have been disintermediated and a more open, consumer-friendly market for content prevails, every Christmas will be a little brighter (that's kind of a stretch?). In my opinion, what's in play today with respect to old media firms is not so different from the agency model that kept eBook prices artificially high for many consumers.
So What About the Global Load Balancer Already?
Most Cloud Architects and Solution Architects of high-availability systems are familiar with load balancers. AWS designed a mostly well-intended and useful service around this technology, often referred to as the (infamous) ELB (Elastic Load Balancer). Functionally, a "classic" load balancer distributes service requests across hosts grouped into a pool of largely identical servers. Spreading requests across multiple servers is a key capability required for the horizontal scale-out favored by Cloud architecture and, to a lesser degree, Enterprise Architecture.
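To make that concrete, here is a minimal Python sketch of the classic pattern: rotate requests round-robin across a pool of identical hosts, skipping any host that health checks have marked down. The addresses and the round-robin policy are illustrative assumptions, not how ELB itself is implemented.

```python
import itertools

class LocalLoadBalancer:
    """Toy local load balancer: round-robin across a pool of largely
    identical servers, skipping hosts marked unhealthy."""

    def __init__(self, hosts):
        self.hosts = list(hosts)
        self.healthy = set(self.hosts)
        self._ring = itertools.cycle(self.hosts)

    def mark_down(self, host):
        self.healthy.discard(host)

    def mark_up(self, host):
        self.healthy.add(host)

    def next_host(self):
        # Walk the ring until a healthy host turns up; one full pass
        # is enough to visit every host, so a dead pool raises instead
        # of spinning forever.
        for _ in range(len(self.hosts)):
            host = next(self._ring)
            if host in self.healthy:
                return host
        raise RuntimeError("no healthy hosts in pool")

pool = LocalLoadBalancer(["10.0.0.11", "10.0.0.12", "10.0.0.13"])
pool.mark_down("10.0.0.12")
print(pool.next_host())  # requests quietly skip the downed host
```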
A Global Load Balancer operates one or more levels above the Local Load Balancer. Like a classic DNS server, the Global Load Balancer returns an IP address. Often that is the address of a Local Load Balancer, but it can be configured to return any public or private IP address; the IP address or VIP returned is determined by the specific solution requirements. Because Global Load Balancer technology stands on the shoulders of a fundamental and ubiquitous technology like DNS, the service can be very resilient if configured that way. In a correctly designed LLB (Local Load Balancer) high-availability solution, the load balancer monitors the health and responsiveness of a pool of servers and directs requests to healthy nodes of an application. A Global Load Balancer performs a similar function, except at the data center level: it can detect when a Local Load Balancer is no longer available and, when that happens, return the IP address of a load balancer or endpoint in another data center anywhere in the world. Because it is based on DNS technology, the Global Load Balancer is not coupled to a specific network. Moreover, the top-level domain is likely not hosted in any Cloud, which provides a degree of separation. Further, Global Load Balancers can determine the nearest endpoint for a specific user request and return the endpoint closest to that user, and they can be configured to monitor the latency of response times across a pool of endpoints. In times of network failure, the endpoint that is normally "closest" or "lowest latency" frequently becomes a very high-latency endpoint. The ability to monitor latency, and to adapt responses based on it, enables a very fine-grained capability to direct users around system and network anomalies such as those that continue to plague US-EAST-1.
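The decision a DNS-based Global Load Balancer makes before answering a query can be sketched in a few lines of Python: probe each region's front door, discard the unhealthy endpoints, and hand back the lowest-latency survivor. The region names and documentation-range addresses below are placeholders, and a real GSLB would cache probe results rather than measure per query; this is a sketch of the logic, not of any vendor's product.

```python
import socket
import time

# Hypothetical GSLB view: each region's Local Load Balancer VIP and
# the port its health-check endpoint listens on.
ENDPOINTS = {
    "us-east-1": ("203.0.113.10", 443),
    "us-west-2": ("198.51.100.20", 443),
}

def probe(addr, port, timeout=2.0):
    """Measure TCP connect latency; None means the endpoint is down."""
    start = time.monotonic()
    try:
        socket.create_connection((addr, port), timeout=timeout).close()
        return time.monotonic() - start
    except OSError:
        return None

def resolve():
    """Return the region and IP of the healthy endpoint with the
    lowest latency -- the answer a DNS query would receive."""
    results = {}
    for region, (addr, port) in ENDPOINTS.items():
        latency = probe(addr, port)
        if latency is not None:
            results[region] = (latency, addr)
    if not results:
        raise RuntimeError("all regions failing health checks")
    region, (latency, addr) = min(results.items(), key=lambda kv: kv[1][0])
    return region, addr
```

Because the answer is just a DNS record, the "failover" is nothing more than returning a different IP when the usual one stops responding or its latency spikes.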
My observation is simply this: why don't Cloud or Web firms take a lesson from classic availability architecture and use a Global Load Balancer to fail endpoints over across data centers? For the enterprise, why make Cloud an either/or question? And why gamble with the availability of important systems when it's not so difficult (especially if you've figured out the persistence layer) to balance systems across multiple Clouds? An architecture that enables rapid switching of endpoints to other data centers would let an enterprise run an application both in the Cloud and in its own data center and balance traffic across them, not in the sense of the much-hyped "Cloud Bursting," which seems to focus mostly on bursty capacity, but as a strategy for mitigating risk: it reduces the correlated risk of running Load Balancers and applications within a single Cloud, even across multiple availability zones (as in the case of AWS). Hurricane Sandy in New York City would have been far more disruptive if not for systems built using Global Load Balancers for high availability. While the platforms I worked on were critical but far smaller in scale than Netflix, several non-web-facing internal applications for which I designed the architecture in 2011-2012 failed over to alternate data centers simply by changing the endpoint returned by the Global Load Balancer. In the case of Netflix, migrating all the data was not an option, yet the instances remained healthy while those behind the failed ELB received no requests; an alternate strategy for directing traffic, in hindsight, could have made a difference. It puzzles me to no end why, given the extensive focus on Cloud failure modes, the AWS ELB remains a single point of failure for Netflix and many other applications running in AWS, whether at modest or extreme scale.
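As a sketch of how little machinery that kind of failover involves, the fragment below counts consecutive failed health probes against a Cloud VIP and, at a threshold, repoints the service's DNS record at the on-premises data center. The `dns.update_record` call is hypothetical, a stand-in for whatever API your DNS or GSLB provider actually exposes; the threshold and TTL are illustrative.

```python
import socket

FAILURE_THRESHOLD = 3       # consecutive failed probes before failing over
RECORD = "app.example.com"  # hypothetical service name

def endpoint_up(addr, port=443, timeout=2.0):
    """Simple TCP health probe against a data center's front door."""
    try:
        socket.create_connection((addr, port), timeout=timeout).close()
        return True
    except OSError:
        return False

def maybe_fail_over(dns, cloud_vip, onprem_vip, failures):
    """One monitoring tick: track consecutive Cloud failures and, at
    the threshold, repoint the service record at the on-prem DC.
    `dns.update_record` is a placeholder for your provider's API."""
    if endpoint_up(cloud_vip):
        return 0  # healthy again; reset the failure count
    failures += 1
    if failures == FAILURE_THRESHOLD:
        # A short TTL means cached answers expire quickly after the
        # switch, so clients converge on the surviving data center.
        dns.update_record(RECORD, onprem_vip, ttl=60)
    return failures
```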
I wonder if Netflix will select an additional Cloud in 2013 and, in doing so, create some real competition in the Cloud Service Provider space. After such a high-profile failure (Christmas Eve, for Christ's sake), I feel certain the decision has already been made. Such a move would legitimize another Cloud. People who know these things tell me that 80% of Amazon Web Services capacity is in US-EAST-1. To me this suggests Amazon Web Services has fallen victim to its own success. The nature of Clouds is such that as they grow bigger, the network effect driven by the efficiency of locating data and compute capacity as close together as possible becomes overwhelming, and the penalty for locating in another region, such as the West Coast, or in another Cloud in another data center, becomes higher. While AWS has recently announced some capabilities that make it easier to migrate to US-WEST-2 (the Oregon region, priced about the same as US-EAST-1), such capabilities don't really seem to matter.
I spend a great deal of my time learning from web-scale best practices such as those co-developed by firms like Netflix, Heroku, Pinterest, and Google, practices that redefine what people think they know about distributed computing. Based on the theme of AWS re:Invent and my personal experience with Fortune 500 Cloud work, 2012 may not only "not be the end of time and ancient calendars"; it may mark the year this thinking makes the leap and infects the DNA of Fortune 500 technology. The itchy little problem with the Cloud as ready for big business remains the persistent failures, yet the risk of those failures can be minimized, and minimized using technology already common in the Enterprise that, unlike Oracle RAC clusters and other high-availability technology, works just fine in both the Cloud and traditional data centers.
Listen closely to what Cloud and web-scale practitioners have to teach. The architecture of Cloud applications deviates in ways that will make your database and application engineers pull out their hair, scream, and storm out of meetings (based on what I've seen, except for the hair pulling). For example, the application server architecture and scale-out strategy for applications in the Cloud is very different from how most Fortune 500s build Enterprise Application Server clusters today. And after you fail in the Cloud trying to kick it old school, the enlightenment comes quickly. Yet at the same time, don't get too caught up in the hype and forget everything you know. The Global Load Balancer commonly used in the large enterprise, when deployed effectively, could very well help you build applications that balance the new with the familiar.
Further Reading / Cultural Reception of Cloud
Some of the media reception of the events follows. I find the coverage grossly inaccurate, yet it's part of the cultural reception of the Cloud, so I've included some links:
‘The Cloud’ Challenges Amazon http://nyti.ms/ZBXT86
Heroku, if you were a character on South Park, I would call you Kenny. Every time AWS has an episode you're killed. bit.ly/U7pu9w
— Brian McCallion (@BrianMcCallion) December 25, 2012