
Compete Against Cisco, and You Might Just Win

October 24, 2012

Cisco CEO John Chambers has attacked most of his Silicon Valley neighbors lately, including some of his strongest technology partners. Chambers recently boasted to CRN: “when we compete, we don’t lose.” Chambers supported the claim with numbers and trends that range from questionable to flat-out erroneous. Consider his attack on Avaya regarding telephony: “Avaya, who was going to out-execute us in collaboration, make no mistake: they are struggling. We’re beating them very bad.” If Cisco is crushing Avaya, why did Avaya gain 3 percentage points on Cisco last year, as Forbes reported in “Cisco Sees Avaya Getting Bigger in Rearview Mirror”?

Additionally, Chambers bases Cisco’s market dominance on a claimed 65% of the telephony market. That is quite a claim considering Synergy Research Group reported a much more competitive telephony landscape (Cisco: 30%, Avaya: 22%).

On to Cisco’s overall competitiveness. If Cisco cannot lose, why does it keep leaving markets? Cisco exited the load balancing market a few months ago. Roughly a year earlier, Cisco decided energy management failed to suit its portfolio. Just prior to that exit, Flip Video no longer satisfied Cisco’s appetite. I could go on, but I think my point is made: Cisco’s presence in a market should not scare away competitors.

Cisco certainly should not be ignored. When it comes to networking, Cisco remains king of the castle (as far as market share goes). However, Cisco could easily land on the losing side of Chambers’ own arguing points when viewed from the right angle (market share, layoffs, share price, etc.).

Avaya Announces Collaboration Pods

September 7, 2012

Last month, I argued that the difficulty selling vanilla converged infrastructure stems from design without an application focus. To analogize: dynamite is unnecessary to tear down the shed in your backyard, and on the flip side, a sledgehammer won’t help you knock down a skyscraper. Last week, Avaya announced its converged infrastructure (Collaboration Pods) and hit the nail on the head. Avaya, which specializes in applications, announced an array of converged infrastructure solutions laser-focused on applications. First, Avaya virtualized the applications (e.g. contact center, unified communications, virtual desktop, etc.), and then selected or built the hardware environment to adequately support them. Avaya’s approach to converged infrastructure is appropriate, well thought out, and demands attention from end users and resellers alike.

Which Cloud Storage Provider is for Me: Vol. 1

August 13, 2012

Recently, Steve Wozniak (the “Woz”) expressed fears about everything moving to the cloud at its current rate: “I really worry about everything going to the cloud….I think it’s going to be horrendous. I think there are going to be a lot of horrible problems in the next five years.” In some ways, I completely agree. At the same time, I don’t think any problem cloud computing causes, no matter how “horrendous” or “horrible,” will stop the transition to the cloud (I would argue that no problem will even slow the movement down). Why do I dare brush off the beloved Woz’s prediction? My prediction rests on the same reason I hate the world’s worst sales pitch, the one that never works: “You should buy my product because we created this product years before the competition.”

I agree that as more and more applications (especially mission-critical applications) move to the cloud, problems will arise. We have already seen a fair share of cloud outages that have shaken the world’s perception of cloud as the next great technological movement. However, consider the way the next generation interacts with data and with each other. For instance, my three-year-old son recently rebuked his grandmother for suggesting that they watch his favorite Curious George on the larger TV screen as opposed to her iPad. My son preferred the iPad purely because of its cloud-essential benefits: 1) he could choose which show he wanted and switch from show to show (on-demand self-service); 2) he eliminated the need to put a disc in a DVD player and wait for it to load (ubiquitous network access and resource pooling); 3) he had access to all the shows he wanted (rapid elasticity). Will this model fail him because of the “horrible” problems that might arise? Sure. However, he will simply switch to another provider that delivers cloud-enabled content in a more efficient, acceptable manner. Providers will learn from today’s cloud outages and adjust. Those who stop meeting users’ needs won’t push technology backwards to the pre-cloud days; more likely, new cloud providers will arise and fix the problems.

Consider the state of Palm. Palm built the first internet-enabled PDA in 1997. Does anyone bother arguing that a smartphone shopper should buy Palm over an Android or Apple device? No chance. Palm was first to market, but it failed to keep pace with user demand. Did that mean the internet-enabled PDA went away? No, other companies simply made it better. The same phenomenon will occur with cloud, because technology always works this way.

Palm built the first internet-capable PDA in 1997

Now that we know cloud isn’t leaving, despite its current inefficiencies, let’s address a question I hear on a daily basis: “What cloud storage provider should I use?” Well, “that is a complicated question” is a cop-out answer to most questions, but this one truly is complicated. In a later post, I will take the time to answer it by working through the questions that make it so complicated. Stay tuned…

Wrong Tool for the Job: the Need for Application-focused Infrastructure

August 3, 2012

 

Since Cisco, Intel, EMC, and VMware rolled out Vblock a few years ago, converged architecture has exploded…in theory. Converged architectures are a great idea; after all, integrating servers, storage, and switching takes IT departments a great deal of time and money. Who wouldn’t want a plug-and-play data center? We are now two or three years into the converged architecture movement, and the hype continues. However, before jumping on the converged architecture bandwagon, read the fine print and make sure you are investing in, or selling, the correct tool for the job.

A recent Wikibon report claims the converged infrastructure market will hit $402 billion by 2017. Seeing as the market was born a little over two years ago, it seems like a winner on its face. However, the report includes “reference architecture” in its definition of converged infrastructure. By 2017, the report estimates that reference architecture will outsell single-SKU solutions in this space by about 75%. In fact, the report also estimates that legacy, single-application architecture will continue to outsell single-SKU solutions by about 30%. What does this mean? It means the fully converged Vblock (VCE), vStart (Dell), VirtualSystem (HP), and other truly converged systems aren’t taking off as their creators thought they would. In fact, EMC recently announced a reference architecture (VSPEX), which seems to call into question the validity of its original converged infrastructure (Vblock).

The problem with single-SKU converged infrastructure comes down to a data center’s purpose. Not all data centers serve the same purpose, run the same applications, or serve the same users; therefore, not all data centers require the same infrastructure from a hardware and software standpoint. Of all the infrastructure an IT department manages, the data center requires the most flexibility and customization. Accordingly, five versions of a converged infrastructure won’t serve the needs of every data center across the planet. The answer lies in application-focused infrastructure.

Application-focused infrastructure starts with the users and applications, and builds the infrastructure to serve those applications. It goes beyond mere reference architecture, because integration actually occurs (preserving the plug-and-play nature); at the same time, it serves the data center’s specific purpose. For instance, a hosted VoIP data center requires different architecture than a hosted CRM application. Manufacturers can still offer converged solutions; however, the infrastructure must meet the needs of the VoIP or CRM application.

The theory behind converged infrastructure is great. It saves money and time to market for data center players. However, unless the infrastructure meets the needs of the application, converged infrastructure can’t serve a broad market and will never live up to its expectations as a single-SKU solution.

Still Money to be had in Networking Hardware

June 27, 2012

Hardware technology vendors constantly battle the “inevitable” commoditization of their products. Think about the massive price drops of PCs in the 1990s, or of servers soon thereafter. For years now, we have been warned about the commoditization of networking equipment, and as Cisco rose to dominate the market, the functionality of networking equipment supposedly took a back seat to price. However, a closer look at the numbers suggests that networking equipment has not commoditized like PCs and servers, and the value of innovative network technology still commands a premium.

To examine the numbers, consider the three manufacturers that come closest to being “pure” networking vendors: Brocade, Extreme Networks, and Juniper Networks. Over the past ten years, each company’s gross margin has remained relatively stable or has actually increased.

Brocade Communications

Extreme Networks

Juniper Networks
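For anyone who wants to reproduce these comparisons, the metric is simple: gross margin is revenue minus cost of goods sold, divided by revenue. Below is a minimal sketch of the calculation; the yearly figures are placeholders for illustration, not numbers from any vendor’s actual filings.

```python
# Gross margin = (revenue - cost of goods sold) / revenue.
# The figures below are placeholders only; pull real numbers from each
# vendor's annual reports (10-K filings) to reproduce the charts above.

def gross_margin(revenue: float, cogs: float) -> float:
    """Return gross margin as a fraction of revenue."""
    return (revenue - cogs) / revenue

# Hypothetical yearly (revenue, cost of goods sold) pairs, in millions.
sample_history = {
    2009: (1950, 820),
    2010: (2090, 860),
    2011: (2150, 870),
}

for year, (revenue, cogs) in sorted(sample_history.items()):
    print(f"{year}: gross margin {gross_margin(revenue, cogs):.1%}")
```

A stable or rising output from real filings, year over year, is exactly the pattern the charts above show.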

What’s the point? The point is simple: networking vendors are still innovating and offering new value to their partner and end-user base. Networking equipment must continue to advance to support the Web 2.0 community and the rise of cloud computing. Perhaps the customer profile changes for networking resellers, but the need for feature-rich networking equipment remains vital. Cloud computing and hosted applications do not remove the need for hardware; they simply shift its location (in some cases). The steady gross margins of networking vendors show that partners can still command a profit and don’t need to go to war on price to provide continued value to end users. The developing technology is impressive, and resellers should take advantage of the opportunities.

 

YOUR Network’s Hurting MY Business!

June 8, 2012

In the early days of the internet, websites were pretty independent. If a website suffered latency problems or endured an outage, the problems were typically isolated to a single company. That was then, this is now…an ever more connected now. Last Friday, Facebook had a “bad day,” according to Facebook IT. However, Facebook’s “bad day” spread across the web like a pandemic. The top twenty news sites across the globe saw page loads slow from the usual 7.5 seconds to 12.5 seconds. At the top 60 retail sites, where faster load times are directly linked to more sales, pages loaded more than twice as slowly (5.7 seconds compared to the normal 2.2 seconds). How did Facebook affect industries around the world? The “like” button.
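To see why one slow widget drags an otherwise healthy page down, think of a page assembled from several resources, one of which belongs to a third party having a “bad day.” The sketch below simulates the difference between blocking on each resource in turn and fetching them concurrently; the resource names and latencies are hypothetical, not measurements from the Facebook incident.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical latencies (in seconds) for the pieces of one page load.
RESOURCES = {
    "own_content": 1.0,   # your servers, perfectly healthy
    "cdn_assets": 0.5,    # images, CSS, JavaScript
    "like_button": 4.0,   # a third party having a "bad day"
}

def fetch(name: str) -> str:
    """Stand-in for an HTTP request; just sleeps for the simulated latency."""
    time.sleep(RESOURCES[name])
    return name

def load_serially() -> float:
    """Block on every resource in turn, the way a synchronous page does."""
    start = time.perf_counter()
    for name in RESOURCES:
        fetch(name)
    return time.perf_counter() - start

def load_concurrently() -> float:
    """Fetch everything in parallel so the slowest resource sets the pace."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=len(RESOURCES)) as pool:
        list(pool.map(fetch, RESOURCES))
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"blocking on each resource: {load_serially():.1f}s")
    print(f"fetching in parallel:      {load_concurrently():.1f}s")
```

The same logic applies in the browser: load third-party scripts asynchronously and one partner’s outage stops being your outage.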

The like button is a small, isolated example of how connected the Web 2.0 world is. However, a like button issue is only a glimpse of the damage that can be done. Many companies, across nearly all industries, are heavily dependent on their web presence. In today’s marketplace, an ineffective web presence is simply unacceptable. In fact, most companies, regardless of industry, are generating revenue in some capacity via their web presence. But a sharp, user-friendly web presence is useless without a highly available network.

Whether it’s API calls, EDI feeds, or social media connecting multiple parties together, LANs and WANs are constantly communicating (internally and externally). Any glitch in your network can bring you down, and it will inevitably bring your neighbors down as well (whether the neighbor is in China, the US, or somewhere in between). Whether we like it or not, a poor network will push customers, partners, investors, and leads away.

 

How is the DDI Market so Large When Nobody Knows What it is?

June 1, 2012

Infoblox is the clear leader in the DDI space. Accordingly, they should be the go-to vendor for your DDI needs, correct? Unfortunately, very few know they have DDI needs. It would help to identify what DDI is. Gartner created the acronym to represent the DNS, DHCP, and IP address management (DDI) market. Now it is much clearer, right? Probably not. Just identifying the three elements of DDI does not uncover any needs in those three areas. So, how do you sell into the DDI space? Try taking the technology out of the picture and asking the following questions. Would you like to take the job of four employees and consolidate it into one? Would you like to take an IT process that has traditionally taken weeks to accomplish and reduce it to ten minutes? Anybody, regardless of technical savvy, can answer those questions affirmatively.

 

Infoblox clearly dominates the DDI marketplace (see Gartner’s report). As networks expand, IPv4 addresses run out, and manual processes to monitor and change network devices overwhelm IT departments, DDI will grow into a household name. No matter what niche of the IT environment you sell into (e.g. wireless, network, voice, etc.) and no matter what vertical your customers sit in (e.g. healthcare, retail, education, etc.), there is a good chance DDI already presents a cumbersome ball and chain for IT departments. Infoblox’s suite of DDI products immediately eases network processes and, in turn, creates immediate savings.
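To make “weeks down to ten minutes” concrete, the grunt work of DDI (find a free address, then register DNS and DHCP for a new host) can be scripted against Infoblox’s REST-style WAPI instead of being passed between four people on a spreadsheet. The sketch below is illustrative only: the grid master hostname, credentials, WAPI version, network, and host name are assumptions, so check the WAPI documentation for your release before relying on it.

```python
import requests

# Assumptions for illustration: grid master address, credentials, WAPI
# version, target network, and host name all need to match your environment.
WAPI = "https://gridmaster.example.com/wapi/v2.7"
AUTH = ("admin", "infoblox")   # use real credentials (or a vault) in practice
NETWORK = "10.10.20.0/24"
HOSTNAME = "web01.example.com"

session = requests.Session()
session.auth = AUTH
session.verify = False         # lab shortcut; verify certificates in production

# 1. Look up the network object so we can ask it for a free address.
network = session.get(f"{WAPI}/network", params={"network": NETWORK}).json()[0]

# 2. Ask the grid for the next available IP in that network.
result = session.post(
    f"{WAPI}/{network['_ref']}",
    params={"_function": "next_available_ip"},
    json={"num": 1},
).json()
ip = result["ips"][0]

# 3. Create a host record, which registers the name and address in one call.
session.post(
    f"{WAPI}/record:host",
    json={"name": HOSTNAME, "ipv4addrs": [{"ipv4addr": ip}]},
)

print(f"{HOSTNAME} registered at {ip}")
```

That is the pitch in code form: one short script in place of a multi-person, multi-week change process.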

 

From Trees to Pancakes: Extreme Networks’ BD X8 Flattens the Network to One Tier

May 14, 2012


With all the bad press Spanning Tree Protocol (STP) receives these days, it is difficult to believe that it was actually created to solve a problem. Although the protocol was invented in 1985, the IEEE published the first standard in 1990. When it became a standard, STP was supposed to encourage multi-vendor network compatibility, provide redundant links, and eliminate loops from a network. Today, network vendors are working as fast as they can to eliminate layers (to eliminate the tree). Presently, you can’t play in the networking big leagues unless you have reduced your network offering to a two-tiered topology. Two tiers have become so standard that it’s almost yesterday’s news. However, the ever-elusive “flat network” made of a single layer has only been accomplished logically, not physically. Virtual ports have allowed multiple stacks to talk to each other as a single device, but physically speaking, the top-of-rack switch still prevails. And any virtualization-enabled top-of-rack switch of decent capacity is not cheap.

Although the single tier (physically speaking) continues to be a dream for the mainstream data center, Extreme Networks may have delivered it with the capabilities of the Black Diamond X8. The X8 sits in the core and scales to 768 ports of 10GbE or 192 ports of 40GbE per box. This density is unmatched by its competitors, and it opens up the possibility of a single-tier network. With this scalability, communication can be handled completely within the box; no intelligence is needed in the top-of-rack switch.

A single box for an entire network…sounds like a cabling nightmare? Without a top-of-rack switch, it probably is (perhaps this is why a single layer is not widely advertised). However, consider a high-density UTP cable-management system as a replacement for the top-of-rack switch. These systems scale to a density capable of organizing the cabling disaster that a maxed-out X8 would bring absent a top-of-rack switch. In the end, the UTP system physically sits where the top-of-rack switch once did, but at a fraction of the cost (roughly a couple hundred dollars as opposed to a $7-10k top-of-rack switch).
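Back-of-the-envelope math makes the pitch clearer. The sketch below uses the port count quoted above and assumed figures for everything else (48-port top-of-rack switches, and the rough prices mentioned in this post rather than vendor list prices).

```python
# Rough comparison of the edge layer in a two-tier design (top-of-rack
# switches feeding a core) versus a maxed-out X8 plus passive cable management.
# Prices are the rough figures quoted above, not vendor list prices.

X8_10GBE_PORTS = 768        # 10GbE ports in one Black Diamond X8 chassis
TOR_PORTS = 48              # assumed ports per top-of-rack switch
TOR_COST = 8_500            # midpoint of the $7-10k range mentioned above
PATCH_SYSTEM_COST = 300     # assumed high-density UTP cable management per rack

racks = X8_10GBE_PORTS // TOR_PORTS           # racks one chassis could absorb
two_tier_edge_cost = racks * TOR_COST         # top-of-rack switches replaced
flat_edge_cost = racks * PATCH_SYSTEM_COST    # passive patching in their place

print(f"racks served by one X8:          {racks}")
print(f"top-of-rack spend avoided:       ${two_tier_edge_cost:,}")
print(f"cable-management spend instead:  ${flat_edge_cost:,}")
print(f"rough edge-layer savings:        ${two_tier_edge_cost - flat_edge_cost:,}")
```

Even with generous assumptions for the passive gear, the edge layer goes from six figures of switching spend to a rounding error.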

All network vendors are working on flatter networks. Most refer to it as a “fabric” strategy and have currently settled for a two-tier network that acts as a single tier. With the X8, Extreme might have physically flattened the network to a single tier. Forget about the foggy “fabric” strategy that exists as one tier in logic but two tiers in physical topology. Take care of the cables with a UTP system, and Extreme has the first pancake-flat network!

Simplified Management: The Key to Success in a Wired/Wireless Environment

March 9, 2012

 

The bring your own device (BYOD) revolution is fully upon us and growing. Gartner estimates that the consumerization of IT will be the most significant trend affecting the IT industry over the next ten years. Although we generally accept Gartner’s Magic Quadrants as truth, and rethink our entire IT strategies when a new Gartner industry report is released, let us consider what is happening at the nuts-and-bolts level of actual networks before we jump on the BYOD bandwagon. In line with Gartner’s prediction, a recent survey showed that 40% of IT administrators manage their networks from mobile devices. When the guys and gals who keep your business alive at the IT level are keeping it running, 24×7, from a consumer-level device, the trend is real. Accordingly, the BYOD revolution is alive and well, and the rumors are valid.

Now that the trend is verified, consider the approaches manufacturers have taken to it. Traditional switch manufacturers have moved into the wireless space (e.g. Cisco, Juniper Networks, Extreme Networks). On the other hand, wireless manufacturers have tried to expand their growing mindshare and creep into the traditional switch space (e.g. Aruba Networks, BlueSocket). Given that IT administrators are starting to manage the network from wireless devices, both moves seem natural and timely. So, who will win the battle? Network administrators are most comfortable with their switch providers, as wired networks have been the dominant network setup for years. However, end users are demanding more and more mobility and greater access to their applications without a wired connection. To predict the winner of the switch-versus-wireless vendor battle, look not to the legacy businesses of these particular manufacturers. Rather, consider the management of a wired/wireless environment.

Traditionally, managing the wired environment was of utmost importance: 24×7 uptime was a requirement, and quick user access was in high demand. At the same time, wireless access was a luxury that could afford to be down. Today, wireless access is equally important to users, and 24×7 access is demanded of it as well. Wired networks are not going away, but wireless networks are becoming just as critical. The key to succeeding in such an environment is management. If managing the wired environment and the wireless environment requires separate management systems and separate skill sets, the IT staff must grow and a new layer of IT conflict develops. While IT budgets continue to tighten, adding staff to manage disparate systems is rarely an option. A single management system across wired and wireless allows a single group of IT workers, with a single skill set, to manage the entire environment. How does a manufacturer develop a single management strategy across wired and wireless environments? There are multiple strategies.

First, a manufacturer can build its own infrastructure to expand into a new market. For instance, Avaya (since the Nortel acquisition) has a time-tested, robust switch portfolio. Like many of its competitors, Avaya made a move into the wireless space, and it did so with a homegrown wireless portfolio. Because Avaya developed the portfolio itself, it was able to build in the same management features found in its legacy switch base. And because the same management capabilities are used for the wired and wireless products, Avaya took it a step further and included wireless and wired capabilities in the same appliance. This “single pane of glass” management strategy makes logical sense, and time will tell if it succeeds in the quickly converging wired-wireless network.

Another approach, taken by Juniper Networks, is to acquire a company to fill the product gap and adapt the acquired portfolio to run a common management system across the wired and wireless product lines. Juniper acquired Trapeze Networks to enter the wireless market. However, Trapeze was built on an entirely different operating system and required different management tools. To simplify network management, Juniper reconfigured the Trapeze hardware to run its Junos operating system (an industry leader in the wired network space) and now manages the wireless products with a Junos-based management tool.

It remains to be seen who will dominate the networking space as wired and wireless networks converge and demand for both architectures becomes equally important. However, both wired and wireless must be managed, and a vendor with a single approach to managing both environments will greatly simplify IT’s duties and have a leg up on its competitors.

Avaya’s Starbucks Approach to the Network

February 15, 2012

Coffee is a popular commodity: its supply far outweighs demand, it is available everywhere from gas stations to five-star restaurants, and the typical consumer cannot decipher the subtle differences in taste from brand to brand. Given coffee’s broad availability and extremely low price, how can Starbucks continually attract buyers away from competitors while charging four to five times as much for a shot of caffeine? The answer lies in Starbucks’ focus, which isn’t coffee at all; the focus is customer experience. Consumers flock to Starbucks for the experience, not the product itself. Avaya has followed Starbucks’ example in its network design. Instead of concentrating on speeds and feeds, Avaya understands that next-generation networks must respond and answer to a single authority: the user.

Most network vendors are rapidly developing a “fabric” strategy that simplifies the data center, enables the expansion of cloud computing, and drives more data center consolidation. In some ways, Avaya is no different: VENA is a network architecture that embraces virtualization and increases network efficiency within the data center through the use of Shortest Path Bridging (SPB). However, Avaya’s VENA strategy and use of SPB have taken the network beyond where most vendors are going with their various fabric approaches. Most fabrics are isolated to the data center. Vendors are achieving unprecedented speeds from rack to rack (the removal of Spanning Tree Protocol and the move toward 10G Ethernet have accelerated this movement), but most have concentrated on speeding up the data center from an “intra” data center perspective. Before you buy into any fabric’s capabilities, consider the broader implications.

First, consider what is housed in data centers. Sure, the traditional suspects still live there: databases, archives, etc. However, the newcomers to the data center are what users care about: user applications. A user application housed in the data center is the whole premise of cloud computing. A user no longer runs the application from his edge device; rather, he accesses the application from whatever edge device he desires, from wherever he is located. This usage model raises the question: who cares how efficient your data center is if your campus, branch, and remote workers can’t access the applications with the same efficiency? Unlike many network vendors, Avaya has addressed this question.

Avaya uses the same VENA architecture and SPB protocol from the core of the data center to the edge of the network. Avaya understands that in order to take advantage of increased data center efficiency, the same single management plane and high availability must extend clear to the edge, because the edge user is running applications that live in the data center. Most fabrics concentrate on the data center and leave campus, branch, and remote sites to more traditional network architectures. This approach is like flying home from a trip on a Concorde jet, but then having to walk the twenty-mile journey from the airport to your house. The quick plane ride is offset by the time it takes to travel the last leg of your trip.
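For readers unfamiliar with SPB, the name is literal: each switch runs a link-state control protocol (IS-IS, per IEEE 802.1aq) and forwards traffic along the shortest path it computes to every other switch, rather than blocking redundant links the way Spanning Tree does. As a purely conceptual illustration (a toy topology, not Avaya’s implementation), here is the kind of shortest-path computation involved:

```python
import heapq

# Toy switch topology: symmetric link costs between nodes (hypothetical fabric).
LINKS = {
    "core1": {"core2": 1, "edge1": 1, "edge2": 1},
    "core2": {"core1": 1, "edge1": 1, "edge3": 1},
    "edge1": {"core1": 1, "core2": 1},
    "edge2": {"core1": 1, "edge3": 5},
    "edge3": {"core2": 1, "edge2": 5},
}

def shortest_paths(source: str) -> dict:
    """Dijkstra's algorithm: best path cost from source to every other switch."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbor, link_cost in LINKS[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return dist

# edge2 reaches edge3 through the cores (cost 3) rather than over the slow
# direct link (cost 5); every link stays usable instead of being blocked.
print(shortest_paths("edge2"))
```

The point is not the algorithm itself but where it runs: Avaya applies the same shortest-path forwarding from the data center core all the way out to the campus and branch edge.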

Avaya designed its network strategy not with the network itself in mind. Rather, Avaya knows that the next-generation network must serve a single master: the user. The user simply wants access to his applications, and those applications are moving more and more to the data center. The end user doesn’t care about the network, and he shouldn’t have to. With that in mind, Avaya has simplified the entire network, core to edge, with a single, robust network strategy.