Moor Insights & Strategy performs three different kinds of research:
- Non-commissioned, public papers and notes: This research, available publicly, is not commissioned or sponsored by any company.
- Commissioned, public papers and notes: This research, available publicly, is commissioned by a company, disclosed clearly in the paper’s disclosure section. You can find a prospectus for TCO analysis here.
- Custom and confidential forecasting and scenario planning: Used inside companies for confidential, strategic planning purposes. You can find a prospectus for custom forecasting services here and for scenario planning here.
You can find our non-commissioned and commissioned, public notes below.
AMD Brings New Value to Radeon (June 16, 2015)
The PC gaming graphics industry today has very few players but many problems. Today’s PC gaming enthusiast market essentially has only two graphics players remaining, AMD and Nvidia. There is a constant back-and-forth between these two players in terms of bringing new features that gamers want and need.
Microsoft’s new operating system, Windows 10, offers gamers new opportunities to stream games and content from their Xbox One to their PCs. This functionality is unique to Windows 10 and requires a certain set of parameters to be met. One of these parameters is ample PC hardware to perform the appropriate decoding for a smooth, high-quality game streaming experience. AMD’s newest 6th Generation Processor for notebooks—codenamed Carrizo and launched in June 2015—features a special video decoder capable of decoding HD video such as Xbox One game streams efficiently, smoothly, and with very high quality. This report looks at how AMD’s 6th Generation Processor notebooks are positioned to fulfill the needs of an Xbox One gamer streaming games to a PC.
Can Broadband Change the WAN? (June 16, 2015)
Competitive environments are forcing businesses to change their IT to drive more agility, enabling them to deploy new software and services quickly to capture more opportunity. This is driving many to use cloud-based applications where ready-made enterprise solutions can be deployed quickly, but what most do not consider is that this move can have a dramatic impact on their business.
Spring 2015 ONUG Meeting Highlights (June 3, 2015)
Today’s networking world is complex, antiquated and full of proprietary products. The Open Networking User Group (ONUG) was built by end users around the idea that open networking is essential to helping business by providing agility and flexibility. Unlike many other consortiums, ONUG is driven by the end customers, not the vendors. ONUG members put together the use cases, business requirements and functional requirements that are then shared with vendors. At this Spring 2015 meeting, ONUG reviewed the progress of their existing working groups. In addition, testing was added for the existing use cases, giving vendors a set of functional requirements to test against in order to verify that their solutions are meeting the needs that were outlined in the ONUG use cases.
Echelon’s Efficient Connected Lighting Solutions (June 2, 2015)
Echelon Corporation is a leader in connected, Internet of Things (IoT) lighting solutions. They give enterprises, governments, parking operators, and anyone who deals with large-scale lighting solutions the ability to provide their constituents with cost reductions as well as safer and more comfortable living and working environments.
Bringing Dev Ops to WAN Orchestration (June 2, 2015)
The complexity of today’s WAN environments breeds inflexibility and inefficiency. Managing the deployment and operation of WAN connections has proven to be both manual and time consuming, driving up costs for the companies deploying them. Any delay in rolling out new services means that businesses miss out on opportunities that could drive new revenue streams. With Gluware 1.0, Glue Networks introduced a cloud-based Software-Defined WAN (SD-WAN) orchestration platform that allows businesses to deploy faster and create more efficient hybrid WANs, while reducing both deployment and operational costs.
HP’s Vision for Tomorrow’s IT (April 28, 2015)
The forces of mobility, cloud, security, and big data are driving a sea change in the role of IT. Their combined impact is creating new business opportunities while simultaneously transforming the customer engagement model. Faced with this environment, today’s traditional hardware-centric view of IT and slow pace of innovation cannot meet future business requirements in a world where latency and scale rule the day. IT must shift to a services-centric mindset and become a strategic business differentiator. To enable this, businesses will need to change how they approach their IT infrastructure in order to maximize business outcomes.
Fifty Shades of Open Networking (April 27, 2015)
Most networking vendors are getting on the “open networking” bandwagon, but unfortunately many appear to be approaching it defensively with a foot in both the proprietary and open camps. They see open networking as a hedge against a market that is pushing towards more openness, while they hope to keep selling proprietary products. The continuum of offerings being pitched as “open” networking spans from vaguely open to fully open. And while most customers are still in the investigative phase of open networking, the disparity of offerings is confusing and in some cases lackluster.
Datacenter Memory and Storage (April 8, 2015)
The server industry is in the middle of a once-every-20-years shift. What has happened in the past is a good indicator of what is about to happen. Given radical past transitions, it is very likely that many of today’s key technologies will fade out of the market by the year 2020.
Multivendor Datacenter Supply Chain Suits Multivendor Clouds (March 10, 2015)
Large service providers like Facebook and Amazon are undergoing rapid growth due to consumer adoption of mobile computing and emerging Internet of Things (IoT) devices and infrastructure. IT is at the heart of their business, and the datacenter is their factory. They are increasingly focused on total cost of ownership (TCO) which includes both the acquisition cost of their datacenter equipment and the operating cost of using this equipment during its lifetime.
Intel SDI Enables Internet of Things (IoT) Intelligence (March 3, 2015)
Intel’s concept of software-defined infrastructure (SDI) extends the definition of the software-defined datacenter (SDDC). SDI is a re-evaluation of system architecture driven by the requirements of business flow, workloads, and specific applications—not by a menu of hardware available to purchase at the moment. SDI could transform mainstream datacenters and has the potential to displace current datacenter infrastructure and highly available transaction processing systems by the end of this decade. Until now, SDI has been conceptual, but Intel is working to enable real-world usage models to turn the SDI vision into a reality.
IoT and Big Data Reshape Support (March 3, 2015)
The world of client support has been generally staid and predictable for years, with innovation focused on reducing cost and streamlining efficiency, often at the customers’ expense. The market is littered with either a confusing array of support choices or prepackaged offerings that miss key elements, none of which have kept up with the latest technology advances.
The Importance of Leading-Edge Modem Technology (February 27, 2015)
As power consumption of smartphone processors and displays continues to decrease, modems become a focal point of the power consumption discussion. The modem is, without a doubt, one of the most crucial parts of the smartphone in today’s connected society. With 4G LTE, users consume orders-of-magnitude more data than with 3G. Increased consumption, paired with the advent of cloud technologies, requires that smartphones always be connected to the network—always sending data back and forth. As a result, the modem and RF frontend have become pivotal components of the smartphone in enabling connectivity and doing so without impacting battery life.
Bringing Intelligence to the Cloud Edge (February 25, 2015)
The telecommunications industry is moving to cloud-based technologies at the network edge to help tackle the explosion of mobile video consumption, service mobility, virtualization of Customer-Premise Equipment (CPE), the Internet of Things (IoT), and other latency-sensitive applications. But unfortunately today, much of the compute still happens at the center of the network or on the client device. This situation results in more traffic, higher latency, and less flexibility.
NVIDIA Tegra X1 Targets Neural Net Image Processing Performance (February 25, 2015)
NVIDIA Tegra X1 (TX1) based client-side neural network acceleration is a strong complement to server-side deep learning using NVIDIA’s Kepler-based Tesla server accelerators. In the software world, “function overloading” refers to using one procedure name to invoke different behavior depending on the context of the procedure call. NVIDIA borrowed that concept for their graphics processing pipelines. They essentially are “overloading” their graphics architecture to be a capable neural network processing architecture. Moor Insights & Strategy evaluates the NVIDIA Tegra X1 in the context of neural network acceleration.
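As a side note for readers unfamiliar with the term, here is a minimal sketch of the “function overloading” idea: one name, different behavior depending on the argument. Python has no built-in overloading, so this illustration (our own, not NVIDIA’s code; the workloads are hypothetical) uses `functools.singledispatch` to dispatch on argument type:

```python
from functools import singledispatch

# One procedure name invoking different behavior depending on the
# argument -- the concept the paper says NVIDIA borrows for its
# graphics pipelines (hypothetical workloads, illustrative only).
@singledispatch
def process(data):
    raise TypeError(f"unsupported type: {type(data).__name__}")

@process.register
def _(data: float):
    # "Graphics-style" path: scale a pixel value
    return data * 0.5

@process.register
def _(data: list):
    # "Neural-net-style" path: a dot-product-like reduction
    return sum(x * x for x in data)

print(process(2.0))        # 1.0  (float path)
print(process([1, 2, 3]))  # 14   (list path)
```

The same dispatch-by-context idea, applied to hardware, is what lets one pipeline serve both graphics and neural-network work.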
Why The Industry Needs Technologies Like AMD FreeSync (January 7, 2015)
For quite some time the gaming hardware industry has been looking for ways to resolve the age-old problem of screen tearing, caused by a mismatch between the GPU and the monitor. It primarily occurs on systems where the GPU (graphics processing unit) can generate frame rates far higher than the monitor’s refresh rate. The original solution to this problem was VSync (vertical sync), which reduced the frame rate of the GPU to either 30 or 60 FPS in order to smooth out the frames and reduce the possibility of frames being generated out of sync with the monitor’s refresh rate.
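The 30-or-60 FPS behavior follows from how VSync waits for the display refresh. A simplified model (our own sketch of double-buffered VSync on a 60 Hz panel, not any vendor’s implementation) is that the effective frame rate locks to the refresh rate divided by a whole number:

```python
import math

def vsync_fps(display_hz, gpu_fps):
    """Effective frame rate under (double-buffered) VSync: each finished
    frame waits for the next refresh, so output locks to display_hz / n."""
    refresh_interval = 1.0 / display_hz
    frame_time = 1.0 / gpu_fps
    n = max(1, math.ceil(frame_time / refresh_interval))
    return display_hz / n

print(vsync_fps(60, 120))  # 60.0 -- a GPU faster than the display is capped
print(vsync_fps(60, 45))   # 30.0 -- just missing 60 FPS halves the output
```

That cliff from 60 to 30 FPS when the GPU barely misses a refresh is the penalty technologies like FreeSync set out to remove by letting the monitor refresh when the frame is ready.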
Moving Beyond CPUs in the Cloud: Will FPGAs Sink or Swim? (December 2, 2014)
General-purpose server processors are reaching “diminishing returns” limits, as performance-per-watt improvements slow and workloads become more specialized. Certain workload classes are open to acceleration by compute offload or alternative (non-CPU) architectures including digital signal processors (DSP), graphics processing units (GPU), field programmable gate arrays (FPGA), and custom logic. While these accelerators historically have been attached to CPUs via offload interconnects, they increasingly are being integrated onto system-on-chip (SoC) designs. As these technologies mature, Moor Insights & Strategy believes that datacenter workloads deployed at scale will use application-specific acceleration models.
Highlights of the Fall 2014 ONUG Meeting (November 13, 2014)
On October 28th and 29th, a host of IT technology and business professionals gathered in New York City at the offices of Credit Suisse to discuss their challenges, collaborate on a common set of needs that could be shared with the industry, and exchange best practices for overcoming the limitations of today’s networking. With a strong showing from the financial community, along with key representatives from other sectors like pharmaceuticals, retail, and transportation, the meeting helped to cement the future of open networking.
Software Designed Measurement & Control (November 6, 2014)
The Internet of Things (IoT) is growing at a rapid pace and is generating explosive amounts of data that must be gathered, analyzed, managed, and turned into business insights. The entire IoT value chain comprises devices that are both intelligent and connected; the complexity of this ecosystem has significantly increased the demands on test, measurement, and embedded design tools. In addition to the added complexity, the next generation of users expects more: connectivity from anywhere, an intuitive user experience (UX), industry-specific capabilities, and the ability to handle Big Data challenges such as data ingress, data health management, and Big Data analytics.
High Availability For Private Clouds (November 3, 2014)
When moving from traditional IT to private cloud, there is generally a tradeoff between elasticity and availability, so only applications that do not demand the highest levels of availability can move to the cloud. Coding high availability (HA) into cloud applications has generally been complex and laborious; ongoing maintenance creates both a cost and a liability for the developer. With their always-on enterprise know-how, Stratus is now bringing the high availability of their enterprise IT solutions to the world of private clouds. Stratus is developing a suite of software solutions that enable rapid deployment of always-on workloads within OpenStack clouds. With a beta program underway, they are already seeing success and are now expanding their initial offering based on program learnings.
Is Dell Driving the Open Future of Networking? (October 28, 2014)
While enterprise computing and storage have both gone through a standardization metamorphosis, enterprise networking has lagged. Enterprise networking remains one of the few areas of proprietary vertical integration in IT. But the need for businesses to move faster and be more agile is putting pressure on IT to adjust to strategies that will help their businesses keep pace with the frenetic rate of change.
The First Enterprise Class 64-Bit ARMv8 Server: HP Moonshot System’s HP ProLiant M400 Server Cartridge (September 29, 2014)
Ubiquitous cloud-enabled smart devices are a driving force behind a major shift in IT infrastructure. Service providers deploying context-rich services to these devices are building massive new datacenter capacity and looking to their vendors to optimize infrastructure for their specific workloads. But given the rapid rate of workload and application evolution, infrastructure optimization will be a continuous process for at least the next few years; optimization demands flexible hardware and software infrastructure.
HP: Protecting Printers With Enterprise-Grade Security (September 22, 2014)
In today’s Internet of Things (IoT) environment, security is not just important, it is essential. Newly connected IoT devices such as thermostats, vending machines, HDTVs, and wearables create some of the largest potential security gaps in the IT infrastructure. As users demand mobility, cloud-based applications, and collaboration, both inside and outside their company’s or organization’s firewalls, the threat of a malicious attack or industrial espionage is around every corner.
Wireless Technologies for Home Automation (July 10, 2014)
Simply put, there are too many choices for today’s low-end mainstream and DIY consumer when it comes to connecting the new breed of Home Automation (HA) equipment. With the plethora of thermostats, lights, locks, and consumer-friendly HA devices hitting the market from Nest, Philips, Belkin, Honeywell, Insteon, Schlage, and others, it has become increasingly difficult for consumers who want to install their own systems or work with a low-end installer to decide which technologies make sense for controlling their entire home. Apple and Google have just begun to approach these markets, and with no clear technology leader, average (and not-so-average) consumers are left to guess where to spend their money.
The Rx for 5G RF (June 30, 2014)
The mobility and Internet of Things explosion has led to a severe wireless spectrum shortage, driving researchers to seek new ways to alleviate the bandwidth crunch. Wireless researchers now plan for a 5th generation wireless standard (5G), which is expected to provide a 1000x increase in network capacity and arrive at the end of the decade. To enable 5G, the wireless research community is seeking new ways to improve efficiencies in prototyping, validation and test of next generation technologies.
Can IBM Revitalize 8P with x86? (June 23, 2014)
The need for greater agility is driving IT strategies. The importance of scale-up applications is growing, but a different innovation cadence is needed. Applications and data streams are becoming more complex, requiring additional processing power and larger memory footprints for these scale-up applications. With cloud-based applications, large virtualization pools, and ERP/database all running on x86 platforms, there is a need for 8 CPU servers that can run standards-based backend workloads or create a large consolidation point for virtualized workloads. The scalability of 8 CPU x86 servers combined with their robust platform availability makes them an excellent choice for deploying critical applications. In an era of rapid innovation and changing IT patterns, these products can be the right choice for demanding applications.
Hyperscale and enterprise datacenters have become increasingly conscious of efficiency, ensuring that the optimal amount of hardware resources is dialed in to the specific needs of their workloads. For many workloads, density-optimized servers have evolved to deliver on the promise of OPEX savings (power/cooling, space) without sacrificing system performance.
HP is expanding their density-optimized portfolio with the release of the Apollo 6000 System. This new server is designed to address the needs of lightly-threaded HPC applications like Electronic Design Automation (EDA) and Monte Carlo Simulations (used for financial risk modeling and various engineering/scientific applications). The system offers high per-thread performance, robust network bandwidth, and rack-level shared infrastructure for efficiency.
HP Apollo 6000 and 8000 Advanced Cooling Solutions (June 9, 2014)
CPU transistors have gotten smaller, and server component power has dropped consistently with each new generation. But datacenters are still up against the proverbial wall with pressures on both power and cooling as they attempt to maximize their use of datacenter floor space and compute resources. Compute density has increased dramatically over the last decade, pressuring datacenter infrastructure. Virtualization and system density drove rack-level power consumption up at the same time that platforms were trying to drive it down. To make a serious change in power and cooling profiles, datacenters (and vendors) need to re-think the servers that they are deploying and alter their datacenter strategies.
We don’t believe that most datacenters today are ready for a complete transition, but instead these new form factors can help augment existing strategies and drive a better overall operational mix.
Rack Scale Server Segmentation 2014 (June 9, 2014)
Moor Insights & Strategy has published commentary and detailed analysis describing the changing scale-out datacenter market for almost two years. In that time, datacenter server hardware segments and descriptive terminology have become fairly complex.
Service-oriented datacenters must balance their mix of compute capability and power consumption within an increasing density per cubic meter of datacenter space. Optimal efficiencies at increasing density can only be achieved through advances in network architecture and specialized computing. There is no way to meet future datacenter needs without shifting our approach away from individual server chassis and toward rack- and datacenter-level architectures.
AppliedMicro’s X-Weave architecture enables datacenter system architects to build dense, shared memory space servers at a rack level using simple “star” network topologies from top of rack (TOR) switches. AppliedMicro’s X-Weave Gearbox2–MLG (Multi-Link Gearbox) product aggregates up to ten 10Gbps Ethernet channels connected to server nodes into four 25Gbps links connected to a TOR. This 5:2 reduction in cables and connectors to a TOR enables architects to pack more server nodes into a rack without requiring mid-rack switches as a means to prevent blowing out TOR port counts.
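The link arithmetic above can be checked directly (a quick sketch using only the counts and speeds stated in the paragraph):

```python
from fractions import Fraction

# X-Weave MLG aggregation: ten 10 Gbps server-facing channels carried
# over four 25 Gbps TOR-facing links.
server_links, server_speed_gbps = 10, 10
tor_links, tor_speed_gbps = 4, 25

# Total bandwidth is preserved across the gearbox (100 Gbps both sides)...
assert server_links * server_speed_gbps == tor_links * tor_speed_gbps

# ...while cables/connectors to the TOR drop by the stated 5:2 ratio.
print(Fraction(server_links, tor_links))  # 5/2
```
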
Is “Scalable Blade” an Oxymoron? (May 19, 2014)
Blade servers solved the problem of density for businesses, but they did so with compromises that limited their appeal for many applications. As the complexity of both applications and environments grew, blade infrastructures were limited in their effectiveness. Customers were relegated to lower-density rack servers for their more scalable applications. And while purpose-built systems allowed a business to tailor to its specific needs, they created more management complexity.
Segmenting the Internet of Things (IoT) (May 16, 2014)
People across a variety of industries have been looking for a useful, functional model of the Internet of Things (IoT) to frame their technology and product investments. This paper presents our vendor-neutral and technology-neutral segmentation, with as little jargon as possible. It is both simple to understand and comprehensive.
Intel SDI and OpenStack Tackle Private Clouds (May 11, 2014)
OpenStack has gained critical mass in the past 18 months and is establishing itself as a credible on-premises private cloud—able to burst to public clouds such as service provider OpenStack deployments, Amazon Web Services, Google Compute Engine, and Microsoft Azure. Although strong support from the open source development community has enabled significant progress, OpenStack still lacks some of the key capabilities that enterprise users require to migrate their current workloads to the cloud.
Finally, Telemedicine That Works (May 10, 2014)
The healthcare industry is under extreme pressure to transform itself. Telemedicine was always viewed as a promising technology to drive this transformation, but the cost and inflexibility of most solutions, combined with varied and inconsistent policy hurdles, prevented telemedicine from becoming a widespread reality. Now, however, a new direction in telemedicine and the governing body’s policies around it actually makes the technology viable, allowing it to finally transform an industry that is practically on life support at this point.
Pay Attention IT: A New Convergence is Afoot (April 28, 2014)
New demands from consumers, employees, and customers are placing enormous pressure on organizations: it’s all about competitive advantage. Organizations need to be more agile, grasp opportunities as they arise, and do it all faster than ever before. As expectations around IT change, new business models and opportunities are up for grabs. To take advantage of this value, organizations need to implement the right combination of strategies, processes, and infrastructure. A piecemeal approach to leveraging new technology—in the midst of a fast-paced market—could leave businesses disaggregated and left on the sidelines by faster competitors.
Those who have converged technology trends to produce an agile IT strategy will have the systems, processes and organizations to work at an accelerated pace. Increasingly, technology capabilities have become the determining factor in an organization’s ability to succeed.
IBM Announces POWER8 with OpenPOWER Partners (April 23, 2014)
IBM is reinforcing its newfound open processor strategy with a POWER8 processor and servers that target cloud and big data solutions. IBM typically claims that new POWER chips offer superior performance to Intel Xeon, the industry leader, and this chip is no exception. POWER8 will be well received by IBM’s traditional scale-up AIX installed base.
Stratus Cloud Solution Beta: SDA for OpenStack Private Clouds (March 31, 2014)
OpenStack has critical mass and is the de facto standard for deploying open source-based clouds. (See our OpenStack point-of-view here.) With increasing levels of investment by service providers like AT&T, Comcast, and Rackspace and infrastructure vendors like Cisco, EMC, HP, IBM, Intel, and Red Hat, OpenStack private/hybrid clouds are becoming a strong alternative to traditional IT models for large enterprises. While moving to the cloud offers the promise of flexibility, scale, and cost savings, a standard cloud model is not inherently designed for enterprise-level availability. With multiple points of risk at both the hardware and software layers, many organizations are hesitant to move their business-critical applications to the cloud as potential costs may outweigh benefits.
Camera sensors and video-capable displays are following Moore’s Law curves of declining cost and increasing quality. They are now ubiquitous in mobile computing devices (smartphones, tablets, and notebooks). As camera module costs continue to decline, a growing range of low-cost consumer and commercial devices will be video-enabled, including many types of Industrial Internet of Things (IIoT) endpoints.
Software Defined Availability (SDA): Critical for Managing Datacenter Scale (February 5, 2014)
Many people think about web and cloud services such as Amazon Web Services (AWS) as “always available”. However, these services have poor availability compared to the high availability (HA) and fault tolerant (FT) IT services that are deployed for processes that must not fail. An example of a process that must not fail—handled by FT systems—is that part of the credit card transaction flow where one bank has subtracted funds but the other bank hasn’t added them yet. Processes like these have tended to run on expensive, explicitly HA hardware. Now, the massively-replicated hardware infrastructure underlying hyperscale services has the potential to lower the cost of HA solutions. Lower costs can be achieved by shifting the focus away from expensive, explicitly HA hardware toward mainstream commercial hardware with software-based availability.
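To make the dangerous intermediate state concrete, here is a toy sketch of the transfer (hypothetical in-memory accounts and function of our own, illustrative only; real payment systems run this on durable, fault-tolerant infrastructure precisely so the window below cannot be lost):

```python
# Hypothetical accounts -- in a real system this state lives on
# fault-tolerant (FT) infrastructure, not in process memory.
accounts = {"bank_a": 100, "bank_b": 50}

def transfer(src, dst, amount):
    if accounts[src] < amount:
        raise ValueError("insufficient funds")
    accounts[src] -= amount
    # The window the paper describes: one bank has subtracted funds,
    # the other has not yet added them. A crash right here is exactly
    # the failure HA/FT systems are built to survive.
    accounts[dst] += amount

transfer("bank_a", "bank_b", 30)
print(accounts)  # {'bank_a': 70, 'bank_b': 80}
```

Software-defined availability aims to keep that guarantee while replacing the expensive, explicitly HA hardware underneath it with replicated mainstream hardware.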
Software Defined Infrastructure (SDI) is Intel’s Future for the Datacenter (January 29, 2014)
Moor Insights & Strategy believes that IT is on the cusp of a major datacenter architecture transition. This transition will be driven by 24×7 global business reach, dramatically increased use and depth of business intelligence (BI) and predictive analytics (Big Data), and pushing sensors and intelligence into our physical world in the form of the “Internet of Things” (from datacenters to wearable consumer electronics). It is impossible to predict exact technology directions even in a three to five year timeframe, but the industry is starting to form a good, high-level framework for the future of IT operations.
Are Software Defined WANs Ready For The Enterprise? (January 28, 2014)
For too long, enterprises have been locked in a world where the advanced Wide Area Network (WAN) functionality required to help expand their businesses and make them more agile has been unattainable due to cost or complexity (or both). As the global business environment continues to speed up, and companies need to be more agile to either gain a competitive edge or just keep up, the need for automated WAN orchestration has gone from a dream to a necessity. With bandwidth being pushed to its limits, the high cost and relatively low bandwidth of Multiprotocol Label Switching (MPLS) lines are being challenged by business-class broadband. But broadband does not offer the same features or dependability that enterprises have come to rely on with MPLS for their WAN services. Just as Software Defined Networks are bringing an opportunity to redefine the structure and economics of the datacenter, can Software Defined WANs do the same for the WAN space? Glue Networks is one vendor delivering a solution in this space that merits a close look if your enterprise relies on Cisco routing today and you demand more flexibility or better performance from your WAN.
Dell Opens Datacenter Networking with Cumulus Networks (January 28, 2014)
Highlights of the Dell – Cumulus Joint Announcement on 1/28/14
- Dell and Cumulus Networks are partnering to enable the Cumulus Linux network operating system to be integrated on Dell Networking S4810 and S6000 switches
- Dell is the first major networking vendor to offer a pre-installed Cumulus solution alongside its own networking operating system – straight from the factory
- This partnership lets enterprises align their networking needs to application and network deployments with a factory integrated solution
- Delivers a common acquisition, deployment and operational model for single source fulfillment and optimized supply chain
Quanta’s Server Business: Can They Scale Beyond Hyperscale? (January 25, 2014)
Large-scale social media, cloud, and search engine providers have become increasingly focused on optimizing the capital expenditures for their datacenters. While these hyperscale customers have historically relied on the leading global OEMs (Dell, HP, and IBM, now Lenovo) for their server infrastructure, the largest of these companies (i.e., web giants like Google, Facebook, and Amazon) have “eliminated the middleman” and now specify and buy servers directly from the Taiwanese companies (ODMs) that design and manufacture the servers on the OEMs’ behalf.
CES 2014 Wearable & Fitness Tech Trends: Going Mainstream (January 18, 2014)
CES has changed considerably over the past 10-15 years. The PC is still a piece of the puzzle but no longer the center of the universe; many new technologies touching many industries are emerging and have taken over the buzz at the show. Computerized cars, robots, drones, smart homes, and fashionable music accessories are everywhere. There were even companies like Yellow Jacket turning your iPhone into a personal Taser. Really? I can just see the application with fitness devices for added motivation. Another major trend, generating far more noise at the show than in past years, was wearable computing and technology intersecting the sport, health, and fitness markets. CES 2014 even featured a separate, dedicated full-day FitnessTech session and a Tech Zone on the show floor.
Dell and Red Hat Collaborate to Deliver OpenStack for Enterprise (December 12, 2013)
We believe that this broad collaboration between Dell and Red Hat is transformative to enterprise IT private cloud deployments. RHEL OpenStack Platform has a good chance to be perceived as the “most standard” version of OpenStack, potentially providing Dell advantages with enterprise IT clients favoring more open implementations.
Dell Data Center Solutions (DCS), A Strong Alternative to DIY in Hyperscale (December 12, 2013)
Over the last decade, large search engine, social media, and cloud providers have built giant datacenter capacity to power their internet services, and they have found themselves needing a new type of server to support their massive scale. For these organizations, their IT infrastructure is the cost of goods sold for the services they provide. They depend on the lowest total cost of ownership (TCO) for their datacenter infrastructures to maximize profits. Moor Insights examines why the largest web giants are buying Dell.
Fitness Wearables- Who Is Positioned To Win In This Emerging HIoT Market? (November 18, 2013)
Moor Insights & Strategy recently published a paper entitled Behaviorally Segmenting the Internet of Things showing how IoT has behaviorally split into two primary segments: the Human Internet of Things (HIoT) and the Industrial Internet of Things (IIoT). Inside the HIoT there is a vertical industry segment, the fitness & health wearables market, which is in its early stages and will see major growth over the next 3-5 years. The winners will be the companies that establish themselves as true experts in health & fitness and become trusted advisors on how to help their users reach personal health & fitness goals. This will require more complete tracking and a focus on Big Data analytics that turn data into meaningful insights.
Lenovo ThinkServer RD540 and RD640- Small Steps Forward (November 3, 2013)
With Lenovo’s latest server announcement on October 16th, we were hoping to see something very different, but we’re still seeing, for the most part, more of the same. The ThinkServer RD540 and RD640 announcement was Intel-leveraged, with little to differentiate the Lenovo ThinkServers from the competition and little to cause customers to move away from their current Dell, HP, and IBM platforms and consider Lenovo. We would like to point out three key observations, none of which we believe will help Lenovo break out of their current market share position.
Connecting with the Industrial Internet of Things (IIoT) (October 29, 2013)
This paper continues the Internet of Things (IoT) market segmentation Moor Insights & Strategy started in the previous research note, Behaviorally Segmenting the Internet of Things (IoT). Here we compare the Industrial IoT (IIoT) and the Human IoT (HIoT) at and near their endpoints. Our comparison highlights near-term IIoT brownfield opportunities.
Behaviorally Segmenting the Internet of Things (IoT) (October 23, 2013)
The industry needed a useful, functional model of the Internet of Things (IoT) to frame recent developments in the space. But Moor Insights & Strategy could not find one that was sufficiently vendor-neutral, technology-neutral, and jargon-neutral, and at the same time, both simple to understand and comprehensive. So we created our own. Unlike previous attempts, we created an IoT segmentation that is almost entirely defined by behaviors rather than by technology.
Software Defined Networking and Emerging Server Form Factors (October 14, 2013)
Virtualization on standardized hardware is a key IT trend that began in the late 1990s with the consolidation and virtualization of storage using SAN and NAS technology. Costs plummeted, customers had greater control, and deploying and re-provisioning became a more seamless and agile process. Then in the early 2000s, compute became virtualized on x86 platforms, bringing those same benefits to the processing front. Today, the final step of virtualization, network virtualization, is in vogue, but as this technology comes into prime time, there may be differences in how it is deployed and how quickly customers move to it. The intersection of network virtualization with changing server form factors, most notably among the largest cloud customers, may present some interesting challenges.
Understanding Lenovo’s Server Position (September 30, 2013)
In today’s server market, there is much focus on the big three, Dell, HP, and IBM, who collectively hold approximately 70% of unit shipments (68.4% in Q2 2013, according to IDC). The top two, Dell and HP, truly control the market volume, with IBM a very distant third, having declined over the last few years to just 11% of overall server unit shipments as of Q2 2013. Talk often focuses on Cisco, which is growing quickly, particularly in blade servers, but off a very small base as it pushes hard to get into the high-volume market. Rarely does the name Lenovo come up unless someone is discussing the IT market in China, but Lenovo remains one of the few players that could break out of the “other” category in the market share reports, even surpassing Cisco and Fujitsu (the current #4 and #5). However, to truly grow share, Lenovo needs to put a concerted effort into servers and has more than a few challenges to overcome.
Dell and Oracle Jointly Improve Their Cloud Ecosystem Competitiveness (September 25, 2013)
This set of partnership announcements between Dell and Oracle might seem tactical. At face value it looks like Dell is defining a new top tier category as an Oracle sales channel. The agreements give Dell some core enterprise IT database goodness and access to Oracle’s impressive enterprise accounts, and reciprocally they give Oracle access to proven high volume hyperscale infrastructure, which enables conversations with a smaller number of very high volume hyperscale customers. More importantly, these first announcements start the gears moving – they get customers thinking of Dell and Oracle in the same sentence with an initial set of projects for which both companies should find it easy to fulfill their obligations. We think that today’s announcements are the beginning of a beautiful friendship.
Are Wimpy Cores Good for Brawny Storage? (August 28, 2013)
When could “wimpy” cores beat brawny ones? They have potential to do so in large-scale distributed storage deployments. This paper explores performance and resiliency trade-offs enabled by using Calxeda’s ARM-based EnergyCore processors and their fabric-based system level architecture as the underpinning for a Ceph distributed object store implementation.
The Battle of SDN vs. NFV (August 27, 2013)
For the past 20 years there has been a slow, methodical pace of change in networking, where each set of new technologies arrives and surpasses the previous generation in an orderly fashion. But today, two new technologies, Software Defined Networking (SDN) and Network Functions Virtualization (NFV), are poised to truly disrupt that pace by changing networking from physical to virtual. In the enterprise market there appears to be a lot of confusion about the differences between the two; people often compare them and say one is a better approach than the other. If you are looking for the smackdown article that pits these two emerging networking methods against each other in a glorious battle for three-letter-acronym supremacy, you’ve come to the wrong place. SDN and NFV have a lot in common; in actuality, the two can coexist in the same network environment and share many of the same characteristics and components. What makes them different has more to do with where they started and who is deploying them than anything else. Both methodologies have a similar goal: reduce the cost, complexity, and rigid nature of networking, enhancing the physical network with a virtual overlay that is easier to deploy, manage, reprovision, and troubleshoot, all with a lower OpEx and CapEx profile.
Intel’s Disaggregated Server Rack (August 20, 2013)
Does “Disaggregation” Really Mean Anything? There’s been a lot of discussion about “disaggregated” servers, racks, and datacenters since Facebook and Intel promoted their vision for the phrase at the Open Compute Summit at the start of this year. Haven’t we spent the last few decades disaggregating datacenter architecture? And if so, what does disaggregation mean now? Is it something different? Moor Insights & Strategy explains Intel’s disaggregated server rack and looks at its implications and impacts on companies.
ARM Mobile GPU Compute Accelerates UX Differentiation (July 15, 2013)
Users continue to demand more from their mobile devices, and many mobile device designers are using Mali-T604-based SoC products today to meet those ever-increasing demands. Additionally, designers are starting to enable and enhance mobile device user experiences through GPU compute. OEMs and software vendors are investing to accelerate image processing, computational photography, game physics, and video processing for both internal and external high-resolution displays. Moor Insights & Strategy looks at today’s and tomorrow’s state of mobile GPU compute and examines ARM’s new Mali-T622 and how it fits into that future.
How to Intelligently Build an Internet of Things (June 27, 2013)
The Internet of Things (IoT), Internet of Everything (IoE), Big Data, Machine-to-Machine (M2M), and related concepts are all generating an increasing amount of hype. The high-tech industry is looking beyond mobility to ambience: sensor-enabled systems of systems are transparently enriching people’s lives within a seamless set of intelligent environments. The current methods for managing enterprise IT infrastructure will not scale to meet the demands of IoT’s systems of systems. In this note, we look at how to build an IoT intelligently and examine competitive efforts in the space.
Two dominant consumer wireless audio standards exist in the installed base today, Bluetooth and Apple’s AirPlay. In spite of their popularity, neither standard was developed to deliver minimal-setup, high-quality audio across a myriad of different products. SKAA is an alternative wireless audio standard that could challenge both Bluetooth and AirPlay, primarily because of its focus on ease of connection, synchronized broadcast capability, and quality of service. Moor Insights & Strategy recommends that the consumer audio ecosystem take another look at the SKAA consumer wireless audio standard for its implementations, and examines the pros and cons of Bluetooth and AirPlay and the potential for SKAA to emerge as a premium wireless audio standard.
AMD SeaMicro: An Accelerator for Hyperscale Workloads (April 24, 2013)
AMD SeaMicro’s disaggregated server enables large and small data center operators to optimize their hardware performance profile for specific applications. Today’s modular x86 servers are compute-centric, designed as a least common denominator to support a wide range of IT workloads. Those generic, virtualized IT workloads have much different resource optimization requirements than hyperscale and cloud applications, and they have resulted in a “one size fits all” enterprise IT architecture that is not optimized for a specific set of IT workloads, especially not emerging hyperscale workloads such as web applications, big data, and object storage. Moor Insights & Strategy takes a look at AMD SeaMicro’s disaggregated servers.
HP Moonshot: An Accelerator for Hyperscale Workloads (April 8, 2013)
Datacenters are unprepared for the upcoming onslaught of mobile connected devices. These are more than smartphones; they are a new category of connectedness called IoT, or Internet of Things. With today’s 24-year-old server architecture, it will be impossible to meet datacenters’ needs. HP’s answer is the HP Moonshot system, a modular and ecosystem-driven approach targeted at specific workloads. In this white paper, Moor Insights & Strategy examines HP’s new Moonshot system and servers in the context of scale-out datacenter challenges, needs, megatrends, and innovations.
The Apple iPad had nearly a three-year head start in extreme-low-power tablets in the enterprise. While Windows tablets have been around for over 20 years, they were never able to crack into the enterprise in significant volumes: they were heavy, thick, fragile, and expensive, with limited battery life. Even though enterprise IT had to incrementally spend time and resources, it deployed iPads because there was no viable alternative. With the combination of Intel’s Clover Trail-based Atom Z2760 and Windows 8 Pro, HP, Dell, and Lenovo have introduced a new breed of tablets that have the end-user benefits of the iPad but with the enterprise friendliness of a Windows PC. Moor Insights & Strategy takes a look at these new enterprise tablets, compares and contrasts them with Apple’s iPad, and makes recommendations to enterprise IT.
NextIO Enables Multivendor Converged Datacenters (January, 2013)
Industry standard x86 server node performance has been improving at an increased pace as core counts and memory capacities have increased. This rapidly increasing compute density is driving a commensurate increase in the costs of implementing in-rack network architectures. The challenge is how to maintain a multivendor, best-in-class datacenter. One good solution is to consolidate first level network and storage resources in a manner that is transparent to existing software stacks. NextIO’s vNET I/O virtualization and consolidation appliances are a well-positioned, practical solution to this challenge.
PC and mobility technologies may look similar today, but they come from very different beginnings and points of differentiation. The two markets’ manufacturers and technologies have been on a collision course for years, but they finally intersected in 2012 with NVIDIA’s and Qualcomm’s PC attack with Windows RT and Intel’s launch of Medfield and Clover Trail into mobile markets. This paper analyzes mobility and PC industry players, their strengths and weaknesses, history, technologies, and finally, the future differentiating technologies required to win in the market.
NVIDIA’s 2nd Generation Maximus: Dawn of the “Hybrid Designer” (November, 2012)
NVIDIA is shipping their latest Kepler-based, 2nd generation Maximus solution. Maximus utilizes Quadro and Tesla cards inside workstations combining visualization with compute for manufacturing, media and entertainment, and energy markets. The 2nd generation Maximus solution enables the melding of design and simulation, resulting in substantial enterprise value but also ecosystem disruption. Moor Insights & Strategy explores NVIDIA’s vision, the challenges Maximus solves, the improvements over 1st generation Maximus, potential for a new category of designer and the ecosystem impacts.
HSA Foundation: Purpose and Outlook (November, 2012)
AMD, ARM, Imagination Technologies, MediaTek, Qualcomm, Samsung, and Texas Instruments have come together to found the HSA (Heterogeneous System Architecture) Foundation, or HSAF. The HSAF is an open, industry-standard consortium founded to define and deliver open standards and tools that let hardware and software take full advantage of the high performance of parallel compute engines, and do so in the lowest possible power envelope. This new environment aims to enable rich new user experiences never seen before, at incredibly low power. Moor Insights & Strategy looks at the HSAF’s goals, benefits, and risks.
Calxeda: Rack Trumps the Chip (October, 2012)
Calxeda has designed an innovative rack-level network fabric architecture for a new breed of IT services datacenters. Calxeda’s architecture today connects dozens of densely packed, independent server nodes and will scale in the future to deliver greater operational efficiencies across these new service-oriented mega-datacenters. Moor Insights & Strategy analyzes the new Calxeda roadmap and technologies in the context of the needs of the largest datacenters.
NVIDIA VGX Technology (May, 2012)
NVIDIA has launched one of the most significant initiatives in the history of the company, one that, if delivered as promised, could propel it into position as a top enterprise technology player. NVIDIA launched VGX at its annual GPU Technology Conference (GTC). Moor Insights & Strategy analyzes the new NVIDIA VGX platform.