
On the MS Technet Radio Win VS Linux show

Open Discussion on Windows and Linux - A response by a Windows & Unix Sysadmin & Programmer

After this was brought up in a Slashdot article, I've taken the opportunity to respond and comment. If anyone on either side of the issue wants to get in touch, either use the contact form on my web site or give me a phone call (Mobile: +61-410-746-120).

As this was finished at 2am on a Friday morning, it has some rough edges. This is not the work of anyone other than myself.

Also, if anyone from Microsoft wants to talk to me, I'd love to have a dialogue about how and why the businesses that I work with use OSS and why they haven't gone to MS (or, in some cases, gone back).

The transcript below is directly from the MS site, with only three changes:

  1. It has been reformatted to fit with my site
  2. Some of the broken links (I assume added by MS' CMS) have been fixed
  3. The MS smart quotes, which also broke the flow of the text, have been removed; some have been replaced by regular quotes, some removed entirely. In some cases nearby words have had to be moved to fix the text flow.

Naturally you can check the original on the MS site to ensure that I haven't altered the meaning of any passage.

My comments are all in blue.

If I get enough comments on this I might add them in like this.

TechNet Radio: Open Discussion on Windows and Linux

Published: December 11, 2004


Martin Taylor:
Hi, I'm Martin Taylor. Welcome to the third installment of TechNet Radio. I'm the General Manager of our Platform Strategy, and in today's broadcast we'll be having an open dialogue on the comparative differences between Linux and Windows. After the broadcast is over, you can also listen to this in a streaming format, you can download the full transcript, and also you can have links to more information on the topics that we'll be discussing.

As General Manager of Platform Strategy, I'm responsible for ensuring that our customers understand the benefits of the Microsoft platform. I also spend a fair amount of time doing a level of comparative analysis, making sure our customers understand the differences between Microsoft and some of the key alternatives in the marketplace, specifically Linux and open-source alternatives. Today, Bill Hilf and I will be spending time talking about that. Welcome, Bill.

Bill Hilf:
Hi, I'm Bill Hilf, Lead Program Manager in the Platform Strategy team. I lead our Linux and Open Source Technology Analysis Center here at Microsoft.

Martin Taylor:
So, Bill and I are here today to discuss the similarities and differences between Windows and Linux and open-source alternatives. Microsoft believes that customer needs drive the competitive debate. We know the only way we win with customers is by having a much better solution to offer our customers in making sure that we're addressing their pains over and above Linux and open-source alternatives. We invest significant amounts of resources to understand the priorities of our customers, particularly those developing IT solutions for their businesses. So, today we'll focus on some key scenarios that our customers have told us are really important to them: TCO, security, risk management, just to name a few.

Unfortunately I feel that inertia is also a large reason for the continuing use of MS products, but that brush can be applied to many situations.

There are quite a few topics that we'd like to talk about today, however, before we get deep on any of those, we think it's important to take a step back and look at the broad landscape and correct a few common misperceptions that exist in the marketplace. I have the opportunity to spend about a week a month outside of the U.S., meeting with many governments and partners and customers around the world, and I frequently spend time with a good section of Fortune 100, Fortune 500 customers that come and visit us at the Microsoft Executive Briefing Center here in Redmond. And there are a few things that I normally have to talk about that might be top-of-mind for them that they'd like us to get some clarity on. For example, one thing that normally comes up is that Microsoft is anti-open source, and they've used some of our activities as Microsoft versus open source. This is definitely not the case. Yes, we do find ways to show the value of the Microsoft platform compared to what the output of open source, mainly the products Linux, Apache and other things. However, at the highest level, open source is a development model.

Exactly, and it is also a set of views, plus some methodologies which support them.

Microsoft has learned some things that exist in that development model that we can bring into our development models, and we've done that. Microsoft also has participated and contributed some technology to the open-source community as well, and so we understand that open source is a development model; it's a way that people can build products both internally in their IT organizations as well as externally in the community. That being said, Microsoft does find a way to show value for the products or the output, let's just say, of some of the open source projects. The other thing that frequently comes up is everybody in the world has embraced open source, how come Microsoft isn't embracing open source? And again, I think it's very important, first of all, to understand what the motives are and the directives are of some of the companies that you might think are embracing open source. As an example, IBM is frequently touted as a company that's embracing open source, when really, when you take a look at it, they're embracing Linux as a platform to show value to customers through their global services business, through their hardware business and through their proprietary software offerings. However, they don't have a roadmap to open source DB2. They don't have a roadmap to open source WebSphere. So they see a benefit for some open-source products when they don't have a strong platform or product in that space. The same can be said for Oracle; the same can be said for even a company like CA that recently has open-sourced its Ingres database. Again, they have a lot of proprietary software that they extract value from and where they don't have a great offering themselves or they want to get some level of ubiquity in the marketplace or they've end-of-lifed the product but want to make sure that customers have a roadmap — then they've chosen to open-source it. So it's very different than full-out embracing of open-source technologies. 
That being said, I think it's also important to understand that open source does not equal open standards. Bill, I know that you spend a lot of time with customers on this discussion as well, regarding open source and open standards.

Again right on the money; there are a lot of companies like those described. One that I'm currently working for is considered an "Open Source company" even though our core product is not open source (it is internal, and most of the product is our services and experience). However, because the platform that we build on is OSS, we are able to go in and fix things; instead of just saying "there's a problem" we can say "there's a problem, it's here, and here's a fix" (or here's some diagnostic output). My view (and that of many others) is that to build something you need to understand one level below your work. For example, if I'm writing an application in PHP, I should have an understanding of how PHP works as a platform, so that I know what the best practices are and where problems are likely to occur. This, far more than any other single thing, helps the speed and ease of development in my experience.

Bill Hilf:
There are also models in the open-source community that back up your statement. If you look at some of the dual licensing models taken on by products such as MySQL, or the dual licensing model of OpenOffice.org, you can see a similar model where there's an open-source project as well as a commercial front-end project to find a business model around that open-source technology.

At least with MySQL, my understanding is that support for the OSS version is also a large revenue stream for them.

But it does bring up the question of differences between open source and open standards, and this is one of the most common misperceptions that I face, when talking with folks in the marketplace and the open-source community. Fundamentally, an open standard is a collection of specifications and reference models that can be applied to interfaces and technologies that allow hardware and software to interoperate and communicate and exchange data. You can think about an open standard much like the VCR business where you have an open standard that allows different technologies to work together in a marketplace, but you can still have a competitive environment, with competitive implementations. Open source is a model driven mainly by a licensing model and a development model. The two, between open standards and open source, are quite different.

True, and if Microsoft spent half the time they spend talking about open standards actually implementing them (or even documenting their own), there'd be a lot fewer complaints out there (Office docs, web standards [HTML, CSS]).

We believe the way to integrate software, and the way to get software to work in a heterogeneous environment, is through promoting open standards that can allow companies like Microsoft, IBM, Oracle, Sun, as well as other types of software and other types of technologies to work together and still co-exist in a competitive environment. It brings up another interesting misperception that we see a lot when we do this comparative analysis between Unix and Linux, and often we hear customers and folks in the marketplace talk about that Linux is Unix. And for those of you who might have been around in the early nineties, which was part of a large boom in the network operating system and really the server operating system model, where we saw in early '91 the emergence of commercial Unixes such as Solaris 1.0, Unixware. In '93, you saw the emergence of NT 3.1 and then the first commercial Linux distribution, Red Hat version 1 [appeared] in late '95; those early years in the nineties really saw the growth of Linux and server operating systems. And you have to take a look, Martin, at the ecosystems around those technologies; and if you take a look at where Linux is today versus where Windows is today versus where Unix is today you have to take a look not just at the capabilities of the operating system, but the ecosystems that you need for that operating system to survive. Applications, hardware devices, support services, trained professionals, all around that environment are the attributes that allow an operating system and a technology to thrive, because at the end of the day it's really not about a kernel. It's about providing a system to provide a business value to a customer. And a kernel is just one part of that stack of that ecosystem that needs to be in place for that to happen.

This comment started with "that Linux is Unix" and ended on an entirely different tangent. My view is that Linux IS a Unix, if not a genetic one; the same goes for Mac OS X.

Martin Taylor:
Well yeah, Bill, that's interesting and I know you and I spend quite an amount of time talking to customers, but let me ask you a couple of things that I know come up frequently on some other myths. One thing I frequently get asked about is this notion that, hey, how in the world can Microsoft, with only a limited number of developers, truly build anything better, [with] more quality, [that's] more stable, more secure, than something like Linux that truly has millions of people working on it every day. So that's one thing that I frequently get asked, and then the other thing I get asked is kind of, hey Linux is Linux is Linux and so whether it's SuSE or Red Hat or whatever the distribution, we're at a slight disadvantage because it's such a broad space with many different distributions that all exactly act and operate the same way. So maybe if you could give some light on those two scenarios.

Bill Hilf:
Yeah, those are two fun areas to talk about and, having been involved in the open-source community for over a decade, I can talk about the development model of the open-source space from a variety of perspectives, both from the application perspective as well as running businesses using Linux and open-source software. The idea that there are thousands of developers working around the world on a particular piece of technology, although it's exciting to think about, when you take a closer look at particularly Linux, it follows a very Darwinian model, where you have a small group of people doing the bulk of the architecture work in the kernel itself, and a larger periphery of people who are testing the software.

I would assume that's true of most large-scale software products. According to a recent issue of the Linux Journal, the Linux kernel has approximately 1,000 regular contributors, about 100 of whom are paid to work on the kernel, but there are many more who write drivers or other low-level code. And I personally know about 20-30 people in the local area who will download the latest kernel, run it on their machines, and play with it and try to break it.

So, fundamentally, you'll see maybe between 100 to 200 developers working on Linux at any given point in time. There might be a larger group that's helping test that, but the real work is within a small group and there's nothing really different there than many other software projects, commercial and open.

As I said above, a little more than that in my view, but otherwise I agree.

The other myth you talked about isn't just Linux, and this is one of my favorite subjects to talk about. We do get this a lot from customers, customers coming to us saying we're thinking about running Linux or my competitor is running Linux, shouldn't I run it as well, and Linux means nothing more than just saying the word Unix or Windows.

My view of this is that if you really have customers saying this, they're unsure; this is not about people doing proper analysis of the options.

It's a really abstract term, and really what they're talking about is like a Linux distribution such as Red Hat or SUSE or Debian or Mandrake. They're talking about a specific distribution and a specific version of that distribution. So if you take a look at, let's say, systems management tools, tools that ship with a commercial Linux distribution like the Red Hat Network or YaST, which is the system management tool for SuSE, or you look at Mandrake or even the other commercial distributions, and they've all implemented a different systems management tool. Now part of the reason why they've done that is to create a value proposition, a value differentiator between themselves and the other people in their competitive space.

But one of the main reasons was that at the time they were introduced (mid-90s) there were no existing systems management tools for Linux.

Because at the end of the day the distribution is a collection of open source software, so they have to find areas to innovate and provide value around those commodity pieces that they're gathering out of open-source space. So one of the challenges there is it becomes something different, based on the distribution. So if you look at a systems management tool like Red Hat Network, it was designed and will only work with a Red Hat system. You can't use Red Hat Network to manage, for example, your Mandrake servers or your SuSE servers. It's not necessarily good or bad, it's just something that's important that we clarify with customers in the marketplace that these are very much turning into commercial operating systems, right, where they have a certain type of value-add and a value differentiator to be brought into the market. So the idea that it's "just Linux" is very much a misperception. Linux is nothing more than a kernel. And it's a kernel to do two things very simply; it's one to implement some POSIX compliance and to run on somewhat industry-standard hardware. That's all it does, and it's not designed to be everything open source. So it's an important perception that we're trying to change in the marketplace, help people understand what it is.

I'm not sure how to respond to that last statement; I disagree that Linux is as simple as they make out, but I'm not sure exactly how to expand on that. Another thing here is that they're going the RMS route by saying Linux is a kernel. It is, but in this context "Linux" refers to a distribution.

Martin Taylor:
And when you think about the work that we're doing with regard to interoperability, I would say this is one of those things, again, it's not a matter of it being good or bad or better or worse, it's just a matter of saying hey, there's a different approach to the path that we're on. Microsoft has taken an approach that says we really want to, let's say, extend the value of Windows, extend the value of our operating system and move things down into, let's say, what would be classified as our core distribution; whereas, in the open source model, things are more plug and play, so to speak, and that does have some level of interoperability and integration challenges. As a matter of fact, many customers talk to me all the time about interoperability. It is one of the biggest concerns of most IT professionals around the world, which is, "how do I get the various things that we have deployed to work together?" And that could be at the network layer, at the application layer, at the data layer, at the management layer, you know, different places across their environment. Microsoft spends a lot of time to ensure that our products both work well together as well as they work well in heterogeneous environments. We talked "here in the U.S." to about 800 IT professionals to ask them some deep questions around interoperability. What we found was about 72% of them felt that Microsoft is the top of all vendors in supporting their major interoperability concerns.

I believe that is biased simply by the sample selection; if I had more information about the specific survey I might retract this.

Some of the big things that they have on their plate [are related to], call it, data and application level interoperability, and the proof points there are Web Services and XML and how applications talk together. The other thing they're looking for is continual ways to manage heterogeneous environments, and I'm really excited about the work that we've done with SMS [System Management Server] and Vintela to allow us to work well across Windows devices as well as Unix and Linux devices. So that's another proof point where we both want to make sure that it's easy to manage and operate our stuff, so to speak, within our stack as well as Microsoft and Windows applications on top of our stack, but then also working across with heterogeneous technologies as well.

Bill Hilf:
I'd like to jump in and comment on that. Having run large Linux environments before, and now I run a lab here in Redmond with about 200 servers, we run over 40 different types of Linux distribution. We also run a lot of Unix and a lot of Windows, of course. We use SMS Vintela [Management Extensions] here to manage that heterogeneous system. It's a good example of how we've taken a Microsoft technology, SMS, and built it in a way that's open and allowing our partners, such as Vintela, which is a Microsoft partner, to build on top of our software stack to enable management in a heterogeneous environment. So it's a nice proof point of both: how do you do interoperability across different types of systems, but also how do you build software that other people can build upon.

Martin Taylor:
As I talk to customers around the world, one thing that becomes increasingly true is, as they deploy Linux or look at Linux as an alternative in their environment, they really want to deploy what we call a commercialized distribution. I always ask the question of customers and yes, there's always a free version, there's Debian, there's Gentoo, there's different distributions that they can pull down and use in a different environment, but when you really want to deploy it in a mission-critical way, when you really want to have something that's broader from an infrastructure perspective, they want something that has support, they want something that has some level of, let's say, consistency [from] platform to platform, and we're seeing more and more customers say we're really going to run a commercialized distribution. Bill, what are some of the conversations you have with customers and some of your insight on some of the challenges as people look to deploy a commercialized distribution but still want this level of flexibility that they try to gather from using open-source technologies?

Here's the one that I really disagree with: I see far too much Debian usage for this to be true, and Debian has support. The only two distributions I would seriously consider using in business are Debian and Red Hat, for the simple reason that I personally know developers from both (again, in the local area) who will provide support either for free or on contract at short notice. I cannot get this with Microsoft (although I believe I could with SGI).

Bill Hilf:
Understanding what a distribution is, is the first step, and again the context here is a commercial Linux distribution. We're not talking about something like a Debian or any of the other purely free distributions. A commercial distribution is a collection of open-source software packages, and to give you a sense of what that might be, if I'm to say a package is something like an RPM, which would be somewhat like a component of software, there's 1,000, 1,400, 1,500 packages on any of the more recent commercial Linux distributions. Now when you think about what that is, those all come from various independent, very loosely coupled software projects. Some of these could be a couple of people working on the project, some could be many more, such as the Linux kernel, and there could be many more people working on that project. Now, distributions such as Red Hat or SuSE, they gather those pieces together, they run them through a variety of tests to make sure they work together, they put them on a CD and they sell to a customer. And what they're really selling is support, because again it's not their software, it's software from the open-source space. Although this may seem sort of obvious, a lot of customers don't understand this difference. They think that maybe Red Hat is creating Linux or SuSE is creating Linux; they are contributing in the open-source space, but the collection of software packages is not their own, it comes from the community. So what you're buying then, at the end, is support. And when you think about what that support means, you think about everything from calling 1-800-my-vendor to get support all the way down to low-level support if there's a bug or security patch or a low-level problem with the software. One of the challenges with the Linux distribution model and support in regards to that is really what accountability does that commercial Linux distributor have to that software.
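As a side note, the package counts quoted here are easy to check for yourself. The sketch below assumes an RPM-based distribution (Red Hat, SuSE, Mandrake); the `rpm` tool won't exist on non-RPM systems, so it falls back gracefully rather than failing.

```shell
#!/bin/sh
# Count the installed packages on an RPM-based distribution.
# Assumes the rpm tool is present; reports 0 elsewhere.
if command -v rpm >/dev/null 2>&1; then
    pkg_count=$(rpm -qa | wc -l | tr -d ' ')  # one line per installed package
else
    pkg_count=0                               # not an RPM-based system
fi
echo "installed packages: $pkg_count"
```

On a stock commercial distribution of this era the number lands in the 1,000-1,500 range mentioned above, and every one of those packages is upstream software that the distributor tests and supports rather than writes.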

So to give you an example, like I said I've run a lot of Linux shops in the past, I run a lot of commercial Linux here. If we have a particular problem in a certain piece of software, anything from let's say from a Kerberos library to Apache to Samba to any other application that might be on that distribution when we go through that chain of support with our commercial Linux distributor, there is a gap between what they're able to supply and what they have to go back to the open source community to get an answer for to get it resolved. In many cases the response is we need to stick with the version that's available at the time that we purchased that distribution, so for example if I'm running Apache 1.3 on my Red Hat Enterprise server, although I may want Apache 2.0 because it might have new features or it might have some new capabilities, I'm outside of my support model now with Red Hat. This is just an example.

And a really stupid one, because what you're doing is installing third-party software (even though a different version is included with the distribution) and asking them to support it. No OS vendor would normally support that (the better ones will if you throw enough money at them). On the other hand, will they support your problem with MySQL if you've installed a new Apache? Probably.

Now the challenge here really is one of the reasons that people go to open source software is this wide variety of software, this community of software development, and also not just the selection of those pieces of software, but the amount of change and the effort of development and the release early release often mentality that the stuff is being iterated over very quickly. When you go with a commercial Linux distribution, you sacrifice some of that flexibility because you are sticking with the packages that they've selected and the supported versions of those packages that they will support you with at that given point in time. So it sort of creates a gate or a wall rather between you and some of that flexibility. It's not necessarily a bad thing, it's just something that customers should have a good visibility into to understand when they pay $1,000 or whatever it may be for support, what is it that they're actually getting support for and what sacrifices do they make when they buy that support.

I don't see the issue here; you're paying for support of The Distribution, not the distribution plus whatever random stuff you've installed (which is exactly the reason I can never see Gentoo being used in a business-critical situation).

Martin Taylor:
I think, Bill, that's exactly the decision criteria that customers need to understand. And I'm hearing more and more customers begin to hit that fork in the road saying, "Wow, I want something that's fully supported; however, I also want this broad flexibility of being able to do different things with my distribution." They're beginning to realize now that you can't have both of those worlds together, necessarily.

You can't have that with most vendors. A recurring theme in rants is buying two pieces of software from one vendor (a real example: MS SQL Server and MS Access), calling support because they won't talk to each other, and still getting the run-around between the MS SQL and Access departments.

You do have to either move more towards the side of fully flexible, open-source projects, which means you don't have that quote unquote award-winning vendor-level support, or you have more of a packaged software, commercialized software scenario which is a bit more like in the lines of how Microsoft distributes software that can be fully supported in a broad-based way.

Bill Hilf:
What we often tell customers is to take a prospective look at it from a risk perspective. What risk are you open to taking in this model, are there things that you might be sacrificing when you make this decision, and making sure you understand all of those points along that risk line of what you're willing to do. And support is just one vector of that risk model.


The next few sections are about indemnification. I just want to say up front that I think the whole indemnification issue is ridiculous, especially as (to the best of my knowledge) none of these policies have ever been tested in a real trial. Add to that that MS gave ~$30M to SCO just after SCO started its IBM lawsuit, and that puts MS on shaky ground.

Martin Taylor:
And another area on that risk model that comes up is a notion of indemnification. It's a notion of how does the vendor stand behind the software that they're distributing and/or that we're buying from them. As I travel around the world talking to customers, I would say there's broad degrees of both, let's say, education and awareness with regards to indemnification and IP protection, as well as different levels of concern. So, as we just spoke about, as you plug and play different things into your Linux open-source distribution, it makes it incredibly difficult for a vendor to give you a level of IP indemnification as they're not completely sure what code is there and what software is there. Microsoft, on the other hand, works very hard to provide our customers the most comprehensive IP protection policy that really exists with regards to Windows against Linux and open-source alternatives. Microsoft recently has been very public with the fact that we support our customers, all of our customers, be it volume licensing customers or end users, we give them unlimited cap and we pay all the damages should anything arise around patent, copyright, trade secret or trademark claims, which are the four most common scenarios that arise when people have IP issues. It's interesting because when you start comparing Windows and Linux against each other there are different areas of risk that people do need to assess, and we're finding more and more that this is one that many IT professionals are putting on, let's say, their criteria list, saying, "if I deploy this technology in a mission-critical way, how comfortable am I with the vendor support behind me?" I think it is important that people begin to look at what Microsoft offers, versus what many of the major Linux distributors offer. IBM, to date, has never indicated that it offers any level of indemnification for Linux or their customers in any broad-based way. 
HP, they have a much more limited view of indemnification today only offering support for SCO or SCO claims against their HP customers. Novell only offers protection for copyright disputes. They don't protect against patent claims or trade-secret claims, and with Novell you have to have a service agreement, you have to have a version of their maintenance and you have to have had that as of January 2004. Red Hat has an intellectual property warranty, where they'll agree to either do a workaround, so to speak, for the infringing code and they've got a small legal fund that they've created that they'll help customers, but they've not exactly said how comprehensive that is. And so again it's not a matter of saying entirely things are better or worse, it's a matter of people really understanding: "what am I getting when I'm buying a commercialized software operating system like Windows?" There's a level of flexibility I might be trading off; however I get a level of support and I get a level of, let's say, risk management with regards to IP policy and IP protection somewhat different than the open-source model that gives you broad flexibility, but not quite the comprehensive support if you take advantage of that flexibility, and definitely not the level of risk management and indemnification protection the way Microsoft does.

Bill Hilf:
And I know, Martin, in the past that we've looked at indemnification and one thing I hear a lot from customers is: "does indemnification matter to me?" And maybe give some examples of what Microsoft has done in the past or the market's done in the past around indemnification.

Martin Taylor:
Yeah, that's a great point. There are a couple of things that we see. First of all, if you take a look at Intertrust, the company that filed suit against Microsoft for patent infringement, Microsoft wrote a check for $440 million and our customers did not have to do anything in their implementation of Microsoft technology nor feel the pain, let's just say, of that situation. We're currently on appeal with a company called Eolas. Our customers know that should we lose that suit, they'll have nothing to worry about, because Microsoft fully stands behind them. But it's not just a Microsoft issue. Kodak recently won a suit against Sun on JDS and a patent infringement as well, and Sun was going to pay $92 million to Kodak to protect their customers because they do indemnify their customers from that perspective. And so it's very important that people understand that this is not just a Microsoft issue. And people find ways to quantify that risk assessment when they're deploying technology solutions.

What about the Timeline case, where MS left developers open to possible patent claims? Some information is available in this Groklaw post

Bill Hilf:
Okay, Martin, one thing that I want to talk about today is, having run IT environments in the past, I really believe as a technologist and as an IT manager that you choose a technology to solve a given problem. There is no one magic bullet technology that solves all problems, and it really is guided around the value you're trying to provide and the problem you're trying to solve. In that experience, what I've found is it goes much farther beyond just looking up the list price of a piece of software on a Web site or the list price of a piece of hardware on a Web site. There are many more factors involved there that you have to take into account. So I'd like to talk a little bit about the total cost of ownership [TCO] of software and really what the value story here is when we're looking at Linux and Windows.

That's true, and that's something that a lot of zealots on either side fail to take into account. I was first impressed with IBM Global Services when I bought an ex-lease HP server which still had the IBM GS sticker on it.


Get the facts stuff below. All I want to say to MS is: since most of the studies have had serious rebuttals from the various Linux companies themselves, from industry associations (I'm thinking of Australia's own OSIA) and from OSS celebrities (Bruce Perens does a lot of work on this), why doesn't MS rebut the rebuttals? Show us why we're wrong and their studies are right.

Martin Taylor:
I would say by far the biggest reason why people are looking at Linux and OS as alternatives is for better TCO. However, when you get a little bit deeper and have a richer discussion in terms of, "have you done any analysis? Have you read any third-party research that has kind of pointed you in this direction?" I'm surprised, actually, at how many customers truly have not rolled their sleeves up to analyze and evaluate total cost of ownership. And so I've spent quite a lot of time this year with my team, working around the world to go quantify a variety of scenarios to help people understand total cost of ownership. Now, I should say that I know, we cannot give you better TCO in every single scenario. There will be a set of things because of maybe Unix legacy, because of your development skills, because of whatever the case might be, however, I am confident that in many, many, many scenarios we've got a great value proposition to offer you with regards to total cost of ownership. So let me go through a couple of things. The first things that come up are just simple workloads. When you take a look at Windows servers and how they're deployed, not so much as a multipurpose server, but maybe a single-purpose file server or Web server or security or networking or whatever the case is. IDC did a study with Windows 2000 compared to Red Hat, and they found that Microsoft Windows, over a five-year period, offered anywhere from 11 to 22% greater TCO on four out of five of those major workloads. The one that we didn't win was the Web work load, but that was with IIS 5.0 and I'm pretty confident with IIS 6.0 we've gotten some of the provisioning issues resolved and I think we can have a good TCO discussion with customers even with the Web work load. The next thing that came up was okay, well we understand the single-purpose workload scenarios, but what about a more rich application-centered environment? 
And so, last year, Giga took a look at building an interactive human resources portal, and they looked at Microsoft and .NET and our stack and compared that to Linux and Java and BEA and another stack over there to build this kind of transaction-based portal. What they found was Microsoft offered anywhere from a 25 to 28% TCO advantage over a four-year period across a variety of elements, both maintenance, administration, training, lifecycle support and some of those key rich elements of total cost of ownership. Now we commissioned both of those studies because there was not a lot of, let's say, data available for customers last year. Over the last six to eight months, we've seen more data come out on a non-commission basis where Forrester has talked to quite a few companies; in looking at five different companies that they interviewed, they found that Linux would not offer any TCO savings for customers. As a matter of fact they could get anywhere from 5 to 20% savings by either upgrading Windows or continuing on with Windows. And then Yankee talked to quite a few IT professionals here in the U.S. and they found that there were no significant total-cost-of-ownership savings by migrating off of a Windows environment. And so, the major thrust of this or the big point that I want to make sure people understand is that it is important to do some analysis. It is important to talk to other companies around the world that have also done their deep TCO analyses to really understand what are the savings that you think you might get or that you actually will get by either upgrading your Windows environment or moving to a Linux environment, or from Unix, looking at Linux [or] Windows. One other thing that comes up is this cost of acquisition, and yes, Bill, as you said you do have to look deeper than just the list price, so to speak, for the software. 
However I think it is important that as we talked about a few sections ago, it is more of a commercialized environment where people are wanting to deploy Linux. And so in doing so they're looking at Red Hat and they're looking at Novell SuSE. And when they're looking at those environments, there is a yearly fee associated with those products to get support and to get security patches. Bearing Point recently talked to both Red Hat [and] Novell, as well as Microsoft, to get a price quote. If they had 522 servers and 5,279 clients to understand what would it cost me over a five-year period just in, let's say, software acquisition. In our case they're buying the bits as well as they're paying for support and security patches and, of course, in the Linux distributions' case they're just paying for support and the security agreements. And in those scenarios, Microsoft was found to be 76% less expensive than Red Hat, if you wanted all your servers covered with 24-by-7 support. So again, it's very important that people go a little bit deeper and really understand both their acquisition cost as well as their broader total cost of ownership when deploying Microsoft solutions as well as when deploying Linux solutions.

And I assume that in this case "greater TCO" really meant better, i.e. lower cost, not more expensive, which is what the phrasing suggests.

Bill Hilf:
Yeah, to give you a couple examples of where some of those costs can manifest, having run Linux environments in the past, it is looking at the total picture. And like I said, there are some situations where having deep control of an operating system is the right thing for a customer, and often I talk with customers about that, and I've met some customers that say yeah, we are an operating system company, and for them they should have that deep control. However, there are many customers that don't need to be an operating system company and often that cost of Linux support returns back into the customer environment, it turns into your own IT staff, your own developers sometimes and a previous organization that I ran with a server farm of about 400 Linux servers, after a period of time I ended up having to employ quite a few deep operating system-level engineers to support that environment. Was that my business, to be an operating system developer? No, it wasn't. It was to sell commercial goods online, just like some customers make golf clubs and sell those. They're not in the business to write operating systems. So, it's really important to understand what business are you in and where will that cost end up manifesting itself over time.

Why did you do that? It could have been for two reasons, more performance or more features, and I don't believe that either would require several "deep OS engineers"; possibly one, to fix bugs encountered in operation. But if, as I believe, you're talking about systems programmers, then at 400 servers you SHOULD have a few, to automate things or to heavily customise systems. These days that's part of my job: to do the things that the vendor can't, won't, or won't yet do, and often at a much lower cost.
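To make the "automate things" point concrete, here is a minimal sketch of the kind of task a systems programmer scripts once instead of hand-editing 400 servers: rendering a per-host configuration file from a template. The host names, template and log-server address are all hypothetical, purely for illustration.

```python
#!/usr/bin/env python3
# Sketch only: render a site-specific config fragment for each server
# in a fleet from one template, rather than editing 400 boxes by hand.
# Host names and the template contents are hypothetical examples.

TEMPLATE = "hostname {host}\nsyslog_server {loghost}\n"

def render_config(host, loghost="log01.example.com"):
    """Return the rendered config text for a single host."""
    return TEMPLATE.format(host=host, loghost=loghost)

def render_fleet(hosts):
    """Map each host name in the fleet to its rendered config."""
    return {h: render_config(h) for h in hosts}

if __name__ == "__main__":
    fleet = render_fleet(["web%02d" % i for i in range(1, 4)])
    for host, cfg in sorted(fleet.items()):
        print("---", host)
        print(cfg, end="")
```

In practice the rendered files would then be pushed out with whatever transport the site already uses (ssh, rsync, a package system); the point is that the knowledge lives in a script, not in one administrator's head.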

Martin Taylor:
One thing that customers ask me sometimes is well, okay, we see the studies and we've talked to companies like Equifax that can show that they can get 13% greater TCO than over deploying a Linux solution or talk to BET.com and show that they can get 30% greater TCO over a Linux solution, and they want to know why. Why is it that Microsoft feels so comfortable and so strong in our TCO conversation? And I think it's important that people understand that it's truly because of our approach of integration. And so, yes, it has something to do with the ecosystem, something to do with [the fact that] there's just a lot more people out there that can support a Windows environment and so there's some economies of scale to be had with looking at how you're going to get your environment supported and maintained. There's migration cost, because many customers might be on Microsoft [products] today, and all those factor in; however, the most important reason why we feel so comfortable in our TCO discussions is because of our approach of integration, the fact that, in Windows server, the .NET Framework, and ASP.NET, and our message queue engine, all these things are deeply integrated into Windows server. It essentially reduces the time an IT professional needs to take in ensuring an environment is working well, so then they can build on top of it. And that's a direct contrast to the open-source model that really shifts that burden, so to speak, onto the IT professional and/or a consulting organization to, let's say, stabilize and design an environment to then begin building solutions on top of.

Bill Hilf:
Yeah, I want to give you a great example just from today actually, when looking at how we leverage that integration, you talked about some pieces, today I was looking at something called the Exchange Best Practices Analyzer. It's a new tool kit from the Exchange team that allows a mail administrator, an Exchange administrator, to run this tool against his Exchange and Active Directory environment, get a deep understanding of the configuration, and then make recommendations based on that. Harvesting that type of best practices, bringing that back into a tool and then bringing that to the customer allows the Exchange admin to now have the experience of not only the Exchange team, but also all of our customers brought into their environment so they can make those changes. As I was looking at the tool, I was asking some of my guys how would we do this with Sendmail or qmail and there was a lot of laughs and giggles because at the end of the day it's probably using a search engine and trying to find the right newsgroup to see what other best practices might be out there. So what could be a six-hour [or] multi-day task, configuring Sendmail or finding a best practice for sending mail is now something we're putting into this type of tool. Those examples really show what we're trying to do with moving that burden, that physical labor cost, that physical time cost in your IT environment, and putting that into the software.

Having actually used this tool on a production Exchange server I can faithfully say that it's poor. It doesn't ask the user anything about how mail is routed in your organisation or how fast your internet connection is, the sort of thing where tweaking actually matters. Every suggestion it makes should either have been the default in the config, or handled by the post-installation scripts. To do the same for Sendmail would be hard, but that's the nature of Sendmail config files. Something like Postfix or Exim uses sensible defaults (I haven't used Exim recently, but I believe this is still true; all my production MTAs are Postfix), so "best practices" amounts to a config file with the minimum number of entries required to work, plus those that depend on your network (e.g. the number of simultaneous SMTP sessions).
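As a rough illustration of "sensible defaults plus a handful of site-specific entries", here is what a minimal Postfix main.cf can look like. The host names and values below are illustrative, not from any real site; everything not listed keeps Postfix's shipped defaults.

```
# Illustrative minimal Postfix main.cf (example values only).
myhostname    = mail.example.com
mydomain      = example.com
mydestination = $myhostname, $mydomain, localhost
mynetworks    = 127.0.0.0/8, 192.168.1.0/24

# One of the few knobs that genuinely depends on your network:
# cap the simultaneous SMTP sessions accepted from a single client.
smtpd_client_connection_count_limit = 20
```

That is the whole "best practice": a few lines that describe your network, with the rest left to defaults that are safe out of the box.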

Martin Taylor:
And I think that kind of leads us to another conversation where we really understand the difference between, let's say, the Microsoft model and the open-source model around testing and around just kind of how you test things and what's the work required to let's say "ensure some level of stability into the marketplace," and I know that you spent a lot of time actually understanding both how Microsoft tests our software as well as how the open-source model works, and maybe you can share some insight on some of that.

Bill Hilf:
Let me give you one of my first experiences with one of the first pieces of software I contributed to the open-source community years ago. It was a small patch to an Apache module and I quickly got some feedback from some guys on the BOS team who were working on BOS said that I broke their Apache on BOS. And I saw sort of the value of the community development, but the real question I had was "what other things did I break?" because no one decided to e-mail me back about if I broke some other module or some other system. And it got me really thinking about this is really an arbitrary or opt-in type of model. Often you hear this called scratch your own itch. And there's some value there, but the concern from a customer perspective this isn't from a developer now, from a customer perspective is what if someone hasn't opted in to test my particular configuration? "What if someone hasn't tested my particular device talking to my particular storage area network running a particular version of the software?" That will eventually fall back to them if that hasn't been tested by the open source community, so although there is this opt-in, scratch-your-own-itch mentality, that doesn't mean it's cohesive. One thing that we do here, and I have spent a lot of time understanding how we test our products, is we provide not so much the specifics of the testing, but the methodology, the process and the organization to do rigorous testing, like plugging in our operating environment against a wall full of different devices, be it printers or different devices that might exist in a customer environment, and, more importantly, doing full stack or what we call scenario testing. So taking Active Directory, Exchange, SQL Server and testing that against a variety of different both hardware and software profiles. And it's that methodology and process that's really the differentiator here.

Firstly, I believe that BOS means BeOS. And considering the problems that I've had with some MS software (I'm thinking here of MS SBS 2003, where five different pressings of the disks had an incredible installer bug that every machine I could get my hands on hit), I believe that no vendor ever tests enough configurations, nor could they. As for the specific problem: some developers on an oddball platform had trouble with a bug fix you put in. Without any reference to the specific situation, I'd say that they most likely just fixed it and let you know as a courtesy, something which happens in large development teams following any model.

Martin Taylor:
I think, as we continue on this discussion, the other thing that quickly comes behind that is now security; okay, so we understand some of the differences from a model and architecture in terms of how you test things, but then security is also another issue where I think we all would agree that a certain level of rigor in testing can allow us to, of course, catch things and catch holes or whatever the case is before we distribute software out. Obviously, Microsoft is incredibly focused on security. Hopefully some you out there have listened to one of the previous TechNet radio broadcasts, which spent time on security, and then the second one, which spent time on patch management. If you haven't, they're both available on Microsoft.com/technet/radio.

And so it is important, though, that people understand that this will continue to be a focus for Microsoft. I frequently get asked, "well, okay, help me understand compare and contrast Microsoft versus Linux on this avenue of security." So I'd like to just kind of tee up a couple of issues and then, Bill, I'd love some of your insight, as well.

First and foremost, I think people are beginning to understand that this is an industry-wide problem. People are beginning to see a similar number of vulnerability disclosures for different open-source projects. Obviously there's a higher degree of pain with regards to the Microsoft environment, because we have more surface area in their environment and we fully understand that and that's why we're making the investments that we're making to both protect our customers as well as give them a roadmap of our security architecture and the products that we have coming out, [such as] the recently released XP SP2, as well as our server roadmap and other things that we're doing.

However I do think it's important that people at least have some slight basis of comparison when looking at the two environments. Forrester recently commissioned their own study to understand three different areas. One, who has the most security vulnerabilities; two, how frequently are they fixed; how long does it take to fix them and then, three, who fixes all of them? And they've categorized this around something called "days of risk." And they've assessed that as, when there's a public or non-publicly disclosed vulnerability, how long did it take the vendor to distribute a patch out to the marketplace for that? And it was some very insightful data, again, even though Microsoft gets a pretty bad rap around security, in the Forester report they found that Microsoft averaged 25 days between disclosure and release of a fix and it was the lowest of all the other platform maintainers that they evaluated. Much lower than Red Hat, lower than Debian, and lower than Novell SuSE. We were the only ones that fixed every single one of the disclosures that were there, and we had 25 fewer what they call high severity flaws than the other vendors. And so again this is definitely not a scenario where we want to say "hey look how much better we are than Linux," it is a scenario that we say "hey, there's a set of data out there that will help you at least compare the two from a snapshot in time, but the most important thing is to make sure you understand how to design for security, how to build for security and what the security roadmap is from the vendor that you're working with."

The study referenced here has been ridiculed elsewhere. But take a look at Secunia's site and compare Windows XP Pro with Red Hat 9: XP with all patches has 21 flaws rated up to "Highly Critical", whereas Red Hat 9 has one rated "Not Critical" (the only unpatched flaw from RH 9, which is now unsupported, onwards, including the community-supported Fedora Core). Once you consider how many of those are for packages that have no equal in Windows (or even in Microsoft's product line) the distinction is even greater, especially once you add things like games which may have race conditions on their high-score files, something which OSS security teams go out and fix, whereas on any other system they'd just be ignored as irrelevant.

Bill Hilf:
You hit an important point there about designing and building for security. I would say one of the most important initiatives here at Microsoft is not just how we patch security problems, but how we write secure code. And at the end of the day, that's how you solve this industry-wide problem of security: you design and build and test more secure code. An important comparative point here is: when you think about writing secure code, as a software developer, writing code is 1% writing code, 99% debugging and testing. So when we think about how we test our code, it's the mundane or non-sexy work that we have to really spend time on. So we do have to pay people who will test our code against four-year-old USB device drivers. And that sort of attraction model really doesn't exist in the open-source space. People aren't really signing up saying I want to go test four-year-old USB device drivers. It's just that not exciting and interesting to do. You just need one or two vulnerabilities to end the day, as we know, so if you think about going forward, planning your IT strategy four, five, ten years down the road, you have to think about the process and methodology of how does this software that I'm buying and bringing into my environment, what is the process and methodology that it's been both developed, built and tested against, because it just could be that one, that four-year-old USB device driver, that no one opted in to decide to test against or to look at for a security audit.

I don't believe that's true. What about all the people who run and deploy Linux? Are you saying that they don't test it on a test network, just like an MS solution?

Martin Taylor?:
And so, at the end of the day, on both TCO and security and even testing as a piece underneath both of those, there's a fundamental theme which basically says how much burden is on the vendor, in this case of course Windows, versus how much burden is on me, you know, i.e., the IT professional and/or a consulting group to do this level of testing, to do this level of integration, to do this level of, let's say, forward thinking and planning to build an environment that we can then really design solutions on top of. We feel great about our architecture, we feel great about the resources we have in place to do this testing and I think we feel comfortable that we know exactly the priority security has both within Microsoft and within our customers and we're working hard to solve their pains in those areas.

Bill, one thing that I think customers are beginning to realize especially now as they look at Linux and open-source alternatives is just to say wow, when I'm writing a check to Microsoft for Windows Server or a Microsoft technology, yes I'm getting the bits and yes I'm getting a level of IP protection and indemnification, but I'm also buying into or taking advantage of a broader ecosystem, and so as someone who builds a lab with many Linux servers but also understands the Windows Server environment as well, what are some of your thoughts and discussion with customers on the ecosystem?

And that's just marketing junk. If this actually were off the cuff, with just a few bullet points of ideas, it would have been much more interesting, but it's too scripted and obviously went through PR first.

Bill Hilf:
Yeah, this is an important subject, as I said earlier nobody runs a kernel. No one runs just a kernel, they run a stack of software and hardware and devices to deliver some sort of business value. And really that's the end goal here, to look at what are the things that I need to bring into my environment to deliver that value. And, just like any sort of ecological system, you need a variety of pieces together, like you need air and water for organic life to grow, you need different pieces to bring that value to your business or to your customer. So, when we look at the ecosystem, we do quite a bit of quantification of the ecosystems, both Linux and Windows ecosystems. We look at not just is the software there, but we look at the support for things like hardware devices, PC systems, server systems, devices, printers, graphics cards as well as the software ecosystem above that, what sort of applications line-of-business applications exist for that ecosystem. And, beyond, that we look at what sort of environment exists for professionals and services and support organizations to maintain grow and help that ecosystem thrive. All of those pieces together are what end up driving the business value, not one piece of software, one piece of hardware. So, when we quantify, we look at the ecosystem not just in terms of "can Linux run on this" or "can Windows run on that," we look at it in terms of "is there actually support for Linux on a given piece of hardware, or is there support of this application on a given piece or a distribution?"

I disagree. We should be saying "I need to run X; what is the best combination of hardware, OS and software to do X in the cheapest, highest-performing way?" This is why at one of my clients we program in Perl on *nix, run our cluster and number crunchers on Linux, and do our output runs on Windows.

So, when we look at what commercial Linux distributions say they support from a hardware perspective, we see, from the server side, maybe up to the hundreds, say, 150 different individual servers. So this would be a very specific server model. And, when I compare that against Windows Server 2003, it's in the 5,000, 6,000 range for just server systems. If I look at something like Windows XP, there are over 40,000 individual systems that are certified. That means they're logoed for Windows.

Which means they work OK with one version of Windows. The laptop that I'm typing this on runs Windows 98SE almost exclusively (alternatively Debian Unstable with a 2.6 kernel), because that's the only version of Windows that runs all its hardware without major problems. Linux had some problems, but the 2.6 kernel and commenting out a few keyboard checks fixed most of them. This laptop came with a Windows logo sticker for 98 and NT.

That's one angle on it. The other angle is to look at the ISV ecosystem, and we have a large ISV ecosystem that runs on Windows and we take a look at—if I need to buy an application, if I need to buy an application from Microsoft or Oracle or SAP or whoever—it may be to run on my particular system, I want it to be supported. I want, at the end of the day, to be able to call somebody and have them resolve a problem or tell me how to configure my system. And that, not just the size of the ecosystem but also the growth rate of ecosystems, we track very closely, so we think about literally tens of thousands of applications that are certified for Windows today and we look at what's certified for Linux today. Again, you even heard me say the word Linux, and that's not correct, it's really what is certified for Red Hat Enterprise Linux version 3 or SuSE Linux Enterprise Server version 9. When you take that look at it and say "what is that ecosystem of applications available to me," it's important, from an IT planning strategy, to say "in five years from now where will that ecosystem be?" I think we can all go back in the press or the market a couple of years ago and just look at the proclaimed growth of the ecosystem for Linux and see where it's at today and make our own judgment of where we might be in two or three years. So it's an important thing to consider, again, it's not about the kernel, it's not about Linux, it's about that ecosystem and the growth around that environment.

This stuck out like a sore thumb to me. The *ecosystem* and *growth* around that environment. Windows doesn't have the growth it used to. "We have a broader, top-to-bottom ecosystem, though it is not yet as deep as Microsoft's." -- Jeff Waugh (Canonical)

Further to Jeff's comment, I feel that Windows never had the community aspect that OSS has, and recently Microsoft has realised this, which has resulted in the creation of things like MSDN Channel 9. While it is working, it really doesn't compare to the OSS world, where it's possible (and I've done just this) to see Linus Torvalds at a conference, without ever having met him before, go up to him with a (valid, diagnosed) bug/issue, and have him say "comment out the line that says X in function Y in file Z" (this was for a piece of really dodgy hardware, the laptop mentioned above). In Windows, or in almost any proprietary system, this just isn't possible.

Martin Taylor:
I think one other thing that people also want to extract value from when they're buying Microsoft is also a level of innovation... and a level of new scenario enablement. And I think sometimes we wrap ourselves up on innovation, being something so "front foot out there," leading edge like, you know, speech recognition or things like that, but I think there are smaller things, too. I remember the dialogue that you and I had, when you recently joined the company after spending most of your life in front of Unix and Linux servers, and how you found some of the little wizards and applets and tools that we allow you to configure servers with and how we've tried to take the work of people and turn them into bits and that level of innovation, I think, people expect to extract, as well. And I'd love to hear just some closing comments from you on really this notion of innovation and enablement on that notion.

Bill Hilf:
I think about it really in terms of you really hire software to do a job. You don't buy software, you hire it to do a job. If we were a warehouse management software company people wouldn't be buying our software just because we provided warehouse management. They'd be hiring us to ship a package out of a warehouse to a customer, so you have to think about any sort of technology investment in terms of what are you hiring it to do.

No, they'd be hiring you to automate as much of that process as possible, and to make interaction with the non-automatable parts easy. Windows does some of this well, but too much of it poorly (although it is improving with every version).

When I first took a look at some of the newer technologies, in Windows Server particularly, some of the things that struck me as innovative were some of the server management tools. The ability to take a Windows server and literally dynamically change it from a DHCP infrastructure server to a streaming media server, or more importantly, taking a file/print server and adding a variety of other services, maybe make it a domain controller, maybe also make it a Web server. The ability for a tool to automatically go out, find the right codec for my streaming media server or find the right configuration details and transform my server into something different or add a new service to it was compelling to me and innovative to me because, having come from the Unix and Linux environment, most of that work falls into the hands of an administrator or developer to piece that together, often to script that together in a certain way and then that responsibility or that subject matter expertise resides within the person. So as someone planning an IT environment, if I'm buying software, remember: you're going to buy commercial Linux distribution, you don't get it for free. If you buy a commercial Linux distribution, you should get some sort of capability out of the software, and it shouldn't just drive right down into hiring more people. At the end of the day, that's what we think. It's hiring software to do a job and that software should be proficient at doing that job and not require you to have more people to get the value out of your software, long term.

Dynamically changing a server's roles? No. You certainly can't do several of the things on that list without rebooting the server in question, which certainly isn't dynamic.

Martin Taylor:
Yeah, and I'm hearing more and more customers say that that's one of the biggest reasons why they've continued to use Microsoft and continued to use Windows Server, contrary to what you might read in the media, Windows Server is continuing to grow, and is a very healthy rich platform, both for IT professionals as well as ISVs, to build solutions on top of, and we also offer an incredible level of rich scenario enablement, scenarios around secure identity management, secure mobile access, communication collaboration, application platform, just a variety of different scenarios. So I'm very excited. I'm very excited about the feedback we get from customers about the richness and the benefits they see in Windows Server 2003, and the different products that run above that stack, so to speak. So, in closing, I really want to thank you for spending time with us to review these differences and similarities between Windows and the open-source alternatives. We hope that you found this information informative. All of the data that we talked about, the analysts' reports, the customer case studies, as well as ones that we did not talk about, all are up on www.getthefacts g-e-t-t-h-e-f-a-c-t-s.com. So, you can get the full details of the reports that we talked about and, again, many of the customer case studies. My e-mail address is MartinTa m-a-r-t-i-n-"t" as in tom, "a" as in apple @microsoft.com. I'd love to hear from you, as well, if you have other comments, or if your experience has been different than the ones that we're seeing with customers; we always love to hear more from our customers around the world.

Lastly I'd like to say that the security as well as the patch radio broadcasts that I spoke about are both available off of http://www.microsoft.com/technet/radio as well as a full transcript of this briefing that you just heard. Hopefully, you'll continue to tune in, but also continue to check Microsoft.com for future TechNet Radio broadcasts. Thanks again.