
Office365: Saas versus On-Premise TCO

I like the concept of Office365 because, unlike Google Docs, it appears (on paper, at least) to integrate well with your desktop Office applications. Users are familiar with Microsoft products and like them (and they're great products). As a small site looking to roll out fresh implementations of both Exchange and SharePoint, with fewer than 100 users, we're probably a key market for Microsoft.

One of the selling points of Office365 (or any SaaS product) is the very low cost of management. But are on-premise costs really that high? My experience with Microsoft servers running on HP ProLiant hardware is that, simply, they never go wrong. Of course, I might just be lucky, and they could go wrong any time now. But generally speaking, they sit in the corner and do their job.

For our 65-user head-office site, a basic Exchange implementation will cost £4k for the licences, £5k for the server (including maintenance), £1.2k for administrator training, £1.7k for a couple of days of installation time, and maybe £2k a year for disaster recovery, data backups and reseller support. So if we keep it for four years, we're paying a total of £19.9k, which is £6.38 per user per month. By coincidence, this is pretty much exactly the cost of basic Office365 (£6.50 per month).
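Those numbers are easy to sanity-check. A quick sketch of the sums (the figures are the estimates above; the dictionary keys are just labels of my own):

```python
# Rough on-premise Exchange TCO, using the figures from the post.
# All costs in GBP; "dr_backup_reseller" is the recurring yearly cost.
users = 65
years = 4

one_off = {
    "licences": 4_000,
    "server_and_maintenance": 5_000,
    "admin_training": 1_200,
    "installation": 1_700,
}
yearly = {"dr_backup_reseller": 2_000}

total = sum(one_off.values()) + sum(yearly.values()) * years
per_user_per_month = total / users / (years * 12)

print(f"Total over {years} years: £{total:,}")
print(f"Per user per month: £{per_user_per_month:.2f}")
```

Which comes out at £19,900 over four years, or £6.38 per user per month.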

However, with on-premise Exchange you have to add the cost and hassle of setting up and managing remote users connecting through their smartphones, their iPads and their home computers. This is a pain and something of a security nightmare. On the other hand, our site only has a basic 2MB ADSL internet connection. That means low bandwidth and, crucially, no SLA. So we'd have to upgrade our connection. A leased line or Ethernet First Mile (EFM) would add around £7k a year. Now Office365 doesn't add up.

If we had a leased-line already, I’d be tempted. But I’m happy to muddle along with on-premise servers and cheap ADSL internet connectivity. It means, if our ADSL is down, we can’t send or receive e-mails, but I’d be more confident of dealing with that than if we couldn’t access our Office365 portal at all.

I want to virtualise our entire server infrastructure with VMware Essentials Plus. If we didn't have on-premise Exchange and SharePoint, and instead used Office365 (or another hosted Exchange solution), it would be impossible to justify the expense, as there wouldn't be enough on-premise servers left to justify the investment in a SAN. That's part of the problem with looking at server virtualisation and Office365 as separate projects. In a small business, you really have to go all-in with one or the other to justify the expense. With a mix and match of on-premise servers and SaaS solutions you might think you're getting the best of both worlds, but you could pay a heavy price.




Windows Intune

I've just finished a trial of the new Windows Intune, and liked it so much I'm about to buy it.

Microsoft say: “Windows Intune simplifies and helps businesses manage and secure PCs using Windows cloud services and Windows 7. The Windows Intune cloud service delivers management and security capabilities through a single Web-based console so you can keep your computers and users operating at peak performance from anywhere.”

It's £7 a month per computer. This makes it an expensive solution to use for all our desktops. But I'm loving it for our remote workers. It's only been out for a few months, so will hopefully improve and add features.

The only downside is I can't buy it from my Microsoft Gold Partner. I have to buy it directly from Microsoft with a credit card. What a hassle! I believe you can use a Gold Partner if you have an Enterprise Agreement in place, but we're not that kind of company. I hate dealing directly with Microsoft and I hate using credit cards.


vSphere 5 licensing

I've been looking at the new (expensive!) licence fees for vSphere 5. It's annoying that there's no way to upgrade from Essentials to Standard if you decide you do actually need more than 144GB of memory. You're forced into spending serious money on new licences.

It's almost worth going for Standard licences from the beginning. I think this would cost £7,688, versus £4,337 for the equivalent Essentials bundle, including support and subscription fees. A bigger hit upfront, but more options to upgrade later.

Almost, but not quite.


Lenovo Thinkpad Edge 13

I've recently bought ten of these for some of our road warriors. We're an HP shop, but I like the fact that these are both cheap and come with an embedded Gobi WWAN (wireless wide area network) modem. This means we install 3G SIM cards into a slot on the laptop (it's behind the battery), rather than mess around with USB dongles. As well as being neater, it's supposed to give a better signal, as the slot provides more power than a USB connection does. HP do similar models, but only at the higher end of their range, and I don't need a high-spec laptop for Excel sales reports, customer PowerPoint presentations and e-mail.

Setting them up was generally dead easy. But I wanted to install Windows 7 Ultimate edition on three of the machines. This required a clean install, as the upgrade didn't appear to work. After installing, I downloaded Lenovo's System Update software, which downloaded and installed all the required drivers. This worked perfectly on two of the machines. But today I tried the third machine and it didn't seem to install half the required software. This may be because the Lenovo driver download website is “under maintenance” at the moment. The main problem was getting the Intel graphics driver to install. I tried and tried and it wouldn't work. In the end, I downloaded an earlier version of the driver from Dell's website (of all places), and this worked. Downloading and installing drivers is one of the most frustrating aspects of working with these machines.


vSphere Storage Appliance

Just had this e-mail from VMware:

Shared Storage Hardware Not Required

Introducing the vSphere Storage Appliance: The advanced features in vSphere Essentials Plus require shared storage capabilities. This used to mean having shared storage hardware in your environment—but no longer. Now you can turn your servers into shared storage.

This could be interesting, seeing as we're about to blow over ten grand on a SAN. Hold that purchase order! According to VMware, we can replace the (literally) thousands of pounds we've been quoted to have an engineer configure our SAN with “a few mouse clicks”. I await further details and reviews.


It's all about the IOPS

I'd never heard of disk IOPS (Input/Output Operations Per Second) until I started researching what kind of SAN we need to buy for our virtualisation project.

But I went to a seminar by an HP storage expert who pointed out that since the days when I started working in IT (the late eighties), disk capacity has increased by an eye-watering 10,000 times, but disk performance has only increased by a measly fifty times. So squeezing performance out of your disks is A. Big. Deal. And it becomes necessary for an IT Manager like me to roll up his sleeves and learn about IOPS.

The HP P2000 SAN that I'm looking at comes in two versions: a 12-bay model supporting LFF (3.5-inch) disks and a 24-bay model supporting SFF (2.5-inch) disks. They're both about the same price. So the 24-bay looks more attractive, right? The problem is SFF disks only spin at 10k RPM, whereas the LFF disks spin at a much faster 15k RPM.

My reseller has recommended against buying SFF disks because he says the performance won't be adequate for our needs. He's recommended twelve 300GB LFF disks in a RAID 6 array. This gives 3TB of usable capacity. Not bad, but it fills all the bays, so there's no room for expansion if we ever need it (well, there is, but it requires purchasing another drive shelf, which isn't cheap).

But I suspect that talk of spin speeds over-simplifies the problem somewhat. The number of disks you have and the RAID level you use also affect performance, and performance is measured in IOPS. It's average IOPS that define performance, not RPM.

With the 24-bay P2000, twenty 300GB SFF disks in a RAID 10 array will give the same amount of usable capacity. It also leaves four bays free, so we can expand our storage later on by up to 20% – a nice buffer.

Which is better? It all depends on the IOPS. I have no idea. But given that RAID 10 offers much better write performance than RAID 6, it wouldn't surprise me if those “slow” SFF disks outperform the “fast” LFF disks. I could work it out with a fancy Excel spreadsheet, but my understanding of the inputs is very limited. I'm hoping someone will do it for me.
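For what it's worth, the spreadsheet is only a few lines. A rough sketch using rule-of-thumb figures rather than vendor data: roughly 175 IOPS per 15k disk, 125 per 10k disk, the standard RAID write penalties (6 for RAID 6, 2 for RAID 10), and an assumed 70/30 read/write mix:

```python
# Back-of-envelope frontend IOPS for the two P2000 options.
# Per-disk IOPS and the 70/30 read/write mix are rule-of-thumb
# assumptions, not vendor figures.

def frontend_iops(disks, iops_per_disk, write_penalty, read_frac=0.7):
    raw = disks * iops_per_disk
    write_frac = 1 - read_frac
    # Each frontend write costs `write_penalty` backend operations.
    return raw / (read_frac + write_frac * write_penalty)

# 12 x 15k LFF disks in RAID 6 (write penalty 6)
lff = frontend_iops(12, 175, 6)
# 20 x 10k SFF disks in RAID 10 (write penalty 2)
sff = frontend_iops(20, 125, 2)

print(f"LFF RAID 6 : ~{lff:.0f} IOPS")
print(f"SFF RAID 10: ~{sff:.0f} IOPS")
```

On those assumptions the “slow” SFF disks come out well ahead, almost entirely because of the RAID 6 write penalty. A different read/write mix would change the numbers, so treat this as a starting point, not an answer.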

Obviously, having to buy twenty disks instead of twelve will up the budget somewhat (by just under two grand). But then the argument becomes “choose LFF disks because it's cheaper”, not “choose LFF disks because of the performance”. When a disk fails and I'm waiting for an array to rebuild, I'm stressed out (it's the pessimist in me). RAID 6 is better than RAID 5 in that it can survive two disk failures, but I'd still rather pay an extra couple of grand to get the resilience of RAID 10 (it's possible we could have ten disks fail and still not lose our data). Because RAID 6 only survives two disk failures, it doesn't obey the law that states “bad news comes in threes”.
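That RAID 10 resilience claim can be sanity-checked with a few lines of Python (a sketch that assumes the twenty disks are laid out as ten two-way mirrors, which is how a RAID 10 array would pair them):

```python
from itertools import combinations
from math import comb

# 20 SFF disks as 10 mirrored pairs (RAID 10).
# Data is lost only if BOTH disks of some pair fail.
pairs = 10
disks = 2 * pairs

def survives(failed):
    """True if no mirrored pair has lost both of its members."""
    # Pair i consists of disks 2i and 2i+1.
    return all(not (2 * i in failed and 2 * i + 1 in failed)
               for i in range(pairs))

# Best case: ten failures (one per pair) and the array still survives.
assert survives({2 * i for i in range(pairs)})

# Worst case: two failures in the same pair kill it.
assert not survives({0, 1})

# Chance that k random simultaneous failures are survivable:
for k in (2, 3):
    ok = sum(survives(set(c)) for c in combinations(range(disks), k))
    print(f"{k} random failures: {ok / comb(disks, k):.0%} survivable")
```

So with luck one disk from every mirror could fail – ten in total – without data loss, but an unlucky second failure in the same pair is fatal, which is why rebuild times still make me nervous.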

The other reason for choosing SFF disks over LFF is that they’re just so much sexier. I know size isn’t supposed to be important, but it is. 24 little disks in a tiny 2U bay, who couldn’t get turned on by that?

3 Hosts, 2 CPUs

I really want to re-use our existing servers when we virtualise our server environment. We have three HP ProLiants that are all under two years old, so barely worn in. The issue is that two of the servers have dual Xeon E5506 processors and the other has faster dual Xeon X5560s. My reseller tells me that if you vMotion a virtual machine from a host with one processor type to a host with a different processor type, it might fail. The solution is to use EVC (Enhanced vMotion Compatibility). His concern, as I understand it, is that we'd effectively downgrade the X5560 to a slower, inferior CPU. However, reading the VMware KB, I'm not convinced that we'd notice any difference:

From the Knowledge Base:

“If I add newer hardware into an EVC-enabled cluster with a lower EVC mode, do I lose performance?
All CPU features provided by your host hardware are available to the hypervisor. Optimizations for CPU virtualization such as AMD-V and Intel VT-x or facilities for MMU virtualization such as AMD RVI or Intel EPT support are still used by the hypervisor. Only those CPU instructions that are unique to the new CPU are hidden from virtual machines when the host joins the EVC-enabled cluster. Typically this includes new SIMD instructions, such as the latest SSE additions. It is possible, but unlikely, that an application running in a virtual machine would benefit from these features, and that the application performance would be lower as the result of using an EVC mode that does not include the features. Check with the application vendor to determine which CPU features are used by the application.”

HP P2000 SAN, more thoughts

The comments in this post are interesting.

“The only part of the P2000 that is not redundant is the backplane. But it is a fairly simple device and after asking around I haven’t heard of very many backplane failures.”

This goes back to my earlier post: redundancy from having two physical hosts, but no redundancy with a single SAN. The SAN is a single point of failure. It requires me to believe that the backplane won't fail, or to spend a lot more money on a P4000, which has complete redundancy. However, I need to consider what would happen if the SAN did die, and the following comments address that:

“If you make Veeam a physical machine then if the SAN dies you can leverage the new vPower features and run your VM’s directly from the Backup server until the SAN is back online and then storage vmotion them back to the SAN.”

Unfortunately, I don't think Essentials Plus includes Storage vMotion. The answer seems to be:

“If you do not have storage vMotion you can just power off the VM then migrate the storage. …. so you would be better of to just power off the Vm after everyone goes home. and then migrating it to the production datastore…. this will also copy any changes that happen from when you do the instant recovery until you shut it down.”

It all sounds very simple and neat.

I have a ProLiant DL380 G5 that I was considering using as a direct-attached backup server. However, it doesn't sound like this is supported, even though it would probably work:

“Did you have any issue using the SC08e HBA on a G5 server? From the QuickSpec of the product it seems that only DL generation G6 and G7 are supported from HP.
It seems due to the fact that the SC08e is PCI-Express 2.0, while the SC08Ge is PCI-Express 1.1, but I suppose that the SC08e should be backward compatible.
I’m going to install it in a DL385 G5!”

It might be easier to buy a NAS anyway; £1,900 will buy me an Iomega StorCenter px4-300r with 4 x 3TB drives. As John writes, “Veeam Deduplicates and Compresses inline… so as it pulls data off the SAN or the local datastore it compresses and deduplicates and then writes to the NAS”, so I'm guessing network bandwidth may not be an issue. It would then be easier to move that NAS to the cloud when a decent internet connection becomes cheap.

It really is an excellent site.