If there was any question that Microsoft wants to be everyone's data analytics shop, the company erased those doubts with its Monday presentation. In fact, Microsoft laid out plans for how the next iterations of many of its products -- SQL Server, Microsoft Azure, and its Hadoop-powered analytics components both inside and outside of Azure -- are being designed as a single data platform.
Certainly, this was Microsoft's aim even before Satya Nadella replaced Steve Ballmer. Microsoft was linking SQL Server and Hadoop back in 2011, added connectivity for SQL Server with the Azure-hosted version of Hadoop in 2012, and expanded the ingestion, storage, and processing capacity of Azure along the way. But with Monday's presentation, Microsoft clarified how it all ties together.
The first and most obvious ingredient is SQL Server 2014, with new features like Azure connectivity and in-memory processing. The good stuff isn't mere hype; InfoWorld's Test Center looked at SQL Server 2014 and came away impressed by its performance gains, high-availability functionality, and backup-to-Azure options. But having SQL Server as the sole cornerstone for a big data or data analytics operation is a little like making a cargo plane your only form of air travel, with no 767 for passengers or twin-engine prop for quick getaways.
Small wonder, then, that SQL Server is getting complementary technology in the form of the Analytics Platform System (APS). It's close to the query-everything approach now used by Microsoft's Hadoop partner Hortonworks and by EMC/VMware Hadoop spin-off Pivotal. Each company boasts of the ability to tap into multiple data sources, including Hadoop.
APS, Microsoft's successor to its Parallel Data Warehouse product, does the same in reverse, pulling data in from Hadoop and more conventional sources. The details of how this works remain skimpy, but it's presented as an appliance (Microsoft calls it "big data in a box"). The company also hinted that APS can be used to take existing data confined within an organization, export it to Azure, process it there, and work with the results in tools like Excel.
Also in the mix is the ostentatiously named Microsoft Azure Intelligent Systems Service (ISS), "a cloud-based service to connect, manage, capture, and transform machine-generated data regardless of the operating system or platform." ISS runs on Azure -- a sure sign of where the ingested data is meant to be stored, analyzed, and processed. Again, Pivotal comes to mind, since one of its selling points is that data can be processed where it's already stored.
This last offering bears more than a passing resemblance to Amazon's Kinesis service, although Amazon's approach to data analytics is far more of a DIY kit: the user is provided with various pieces (data ingestion, data storage, data analytics) that have to be cobbled together by hand. Microsoft envisions a scenario where most of that heavy lifting has already been done -- if only by dint of all the products bearing Microsoft's logo and sporting a degree of native interconnectivity -- and where existing Microsoft tools (the Hadoop-powered HDInsight and Power BI for Office 365, to name two) fill in the rest.
The cutesy formula Microsoft used to describe the different parts of its platform was "[data + analytics + people] @ speed." The "people" side of the equation once again includes the likes of connectivity to data through Microsoft Excel -- a smart way to keep part of the desktop edition of Microsoft Office relevant to business users. That formula is also a fair summary of Microsoft's approach to big data: have something to offer everyone somewhere in its stack -- and maybe be able to offer everything to someone.