My frustration with Hyper-V, do you really save anything?

The story of a difficult experience with Hyper-V

If you follow my Twitter feed you knew this article was coming…

After spending about three hours on the phone with a customer trying to help them get their CSV (Cluster Shared Volume) for Hyper-V back online (I was on the phone not because I'm a Hyper-V guy, but because I implemented the storage they use for Hyper-V), I decided that maybe I should do a deep dive and learn a little more about how CSVs work so that I can better compare them to VMFS. VMFS is the clustered file system that VMware uses to share SAN LUNs between physical VMware servers. This article also goes on to explain why I consider Hyper-V the inferior solution, based on some of the other issues I've seen in my admittedly limited experience with it.

Anyhow, before we get to the technical stuff I need to point out that I am obviously biased toward VMware… it's what I do every day. I will try to be as fair as possible, but let's face it: if you are just looking for the takeaway without reading the whole post, it is that VMFS is far superior to CSVs, and that while VMware might look more expensive on a bill of materials, it will probably save you time and money in the long run.

The Technical Stuff

Microsoft failover clustering has been around a long time; Microsoft has used it for everything from Exchange clusters to SQL clusters. Cluster Shared Volumes (CSVs) were added to failover clustering for Hyper-V so that virtual machines could more easily move from one Hyper-V host to another, similar to what VMFS allows VMware ESXi servers to do. Both enable high availability for virtual machines: if a host fails, other hosts can access the virtual machines the failed host was running. CSVs are also necessary because underneath them sits NTFS, which was never designed to be accessed by multiple systems at the same time, so something had to be put in place to make that possible.

OK, so the first article I came across on TechNet has this to say:

“…the Cluster Shared Volumes feature included in failover clustering is only supported for use with the Hyper-V server role. The creation, reproduction, and storage of files on Cluster Shared Volumes that were not created for the Hyper-V role, including any user or application data stored under the ClusterStorage folder of the system drive on every node, are not supported and may result in unpredictable behavior, including data corruption or data loss on these shared volumes. Only files that are created for the Hyper-V role can be stored on Cluster Shared Volumes. An example of a file type that is created for the Hyper-V role is a Virtual Hard Disk (VHD) file.
Before installing any software utility that might access files stored on Cluster Shared Volumes (for example, an antivirus or backup solution), review the documentation or check with the vendor to verify that the application or utility is compatible with Cluster Shared Volumes.”
Taken from: http://technet.microsoft.com/en-us/library/dd630633%28v=ws.10%29.aspx

So to me, that means that CSVs are flaky, to say the least… but let's continue.

VMFS, on the other hand, can store pretty much anything you can upload to it… zip files, ISO files, and so on. VMFS is almost like LVM in Linux: it doesn't care what you put on it.

My Next Point

All Hyper-V nodes that are using a CSV are at the mercy of the coordinator node for that CSV. Think of it this way: you need to look for something that is in a filing cabinet, but before you can actually get the folder you need, you must first talk to the secretary and ask if it's OK to look at it. In more technical terms, the coordinator node keeps track of all the metadata and file locking for folders on the CSV; once the coordinator node grants you access to a folder, I/O to things in that folder happens directly against the LUN. But don't take my word for it… Microsoft explains it in this article: http://support.microsoft.com/kb/2008795
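If you're curious which node is currently playing secretary for a given CSV (or want to move that role by hand), the FailoverClusters PowerShell module will show you. A rough sketch, assuming a cluster named HVCLUSTER and a CSV called "Cluster Disk 1" (both names are hypothetical):

    # List each CSV and the node that currently coordinates it
    Get-ClusterSharedVolume -Cluster HVCLUSTER | Select-Object Name, OwnerNode, State

    # Manually move the coordinator role for one CSV to another node
    Move-ClusterSharedVolume -Cluster HVCLUSTER -Name "Cluster Disk 1" -Node HV02

    # Show whether each node has direct or redirected access to the CSV (2012 and later)
    Get-ClusterSharedVolumeState -Cluster HVCLUSTER -Name "Cluster Disk 1"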

What scares me about this method is that if something gets hosed on the coordinator node and the cluster doesn't fail the role over properly, your CSV is inaccessible. And going by the reliability track record of Microsoft's clustering services, I would not bet my job on Failover Cluster Manager 🙂

VMFS, on the other hand, is a clustered file system that has no owner… there is no single node that controls access to the file system. Locking is done at the file level using an on-disk heartbeat (or "pulse") field: a host must periodically update its timestamp in that field to tell the file system it is still using the file. If a host crashes and another host wants the file, the second host can see that the timestamp hasn't been updated lately and take over ownership… which means each host can get at its files after a node failure without waiting on a response from a centralized management node. If you want the in-depth answer, check out this article: http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html

Next, let's talk about how to get Hyper-V to actually work… and I'm not talking about just getting a VM to boot up. I'm talking about setting up HA, DRS, automatic load balancing and so on. With VMware you group all of your physical servers into a cluster and then check two boxes: one to turn on HA and one to turn on load balancing (DRS). Of course, you need to set up network interfaces for vMotion… but other than that you're done. Oh, and by the way, this was all done from the VMware vSphere Client.
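If you'd rather script those two checkboxes than click them, the equivalent in VMware PowerCLI is only a few lines. A minimal sketch, assuming a vCenter at vcenter.lab.local, a datacenter named DC1 and a host named esx01.lab.local (all hypothetical names):

    # Connect to vCenter and create a cluster with HA and DRS turned on
    Connect-VIServer -Server vcenter.lab.local
    $cluster = New-Cluster -Name "Prod" -Location (Get-Datacenter "DC1") -HAEnabled -DrsEnabled -DrsAutomationLevel FullyAutomated

    # Move an existing host into the new cluster
    Move-VMHost -VMHost (Get-VMHost "esx01.lab.local") -Destination $cluster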

On to Hyper-V… let's see, where do we start… no, really: which interface do you want to start with? If I want to configure virtual machines I'll need to use Hyper-V Manager… or maybe System Center Virtual Machine Manager (SCVMM). If I want to set up CSVs to allow HA to take place, I'll need to fire up Failover Cluster Manager. Oh, and if I want to actually format a LUN, well then I'll need to get into Disk Management.
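To be fair, on recent Windows Server releases most of that tool-hopping can be collapsed into the FailoverClusters PowerShell module if you'd rather script it. A rough sketch with hypothetical node names HV01 and HV02, assuming the shared LUN is already formatted NTFS:

    # Validate and build the failover cluster that Hyper-V HA sits on
    Test-Cluster -Node HV01, HV02
    New-Cluster -Name HVCLUSTER -Node HV01, HV02 -StaticAddress 10.0.0.50

    # Add the shared LUN to the cluster and promote it to a Cluster Shared Volume
    Get-ClusterAvailableDisk -Cluster HVCLUSTER | Add-ClusterDisk
    Add-ClusterSharedVolume -Cluster HVCLUSTER -Name "Cluster Disk 1"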

I’m sure you get my point….

We could talk about virtual switches and distributed virtual switches (or the lack thereof), or things like NIC teaming and how it is implemented, but I don't want to write a book.

The takeaway

OK, so normally the number one reason I hear that people are using Hyper-V is that it comes with Windows… it's free. But is it really free? I could argue that it takes less time to implement VMware than it does Hyper-V… and isn't it known that "time is money"? It certainly is if you are paying someone to set it up.

I could also argue that 3+ hours of downtime trying to resolve a CSV issue where VMs are not accessible is definitely a loss of productivity and, in turn, a loss of money.

Lastly, SCVMM is not free, and while it is not required, if you want to compare apples to apples you will want it…

So VMware may be an additional line item on a bill of materials, but in the end it may be the best damn investment you make in your virtual environment.

If you are not convinced yet, here are some other fun stories about Hyper-V:

  • http://www.ms4u.info/2011/05/why-you-should-not-running-domain.html <- Failover clustering requires AD; if you virtualize all of your domain controllers on the cluster, the cluster won't start.
  • http://blogs.technet.com/b/chrad/archive/2010/07/15/cluster-shared-volumes-csv-extending-a-volume.aspx <- Expanding a datastore in VMware is as simple as right-clicking the LUN, selecting expand, and clicking Next about three times… check out this link for the process for a CSV 🙂
  • Do a Google search for Hyper-V live migration using multiple NICs… you won't find much. So if you buy that server with 128 or 256 GB of RAM and need to take it out of production for maintenance, you'd better have 10 Gbps networking in place, or grab dinner and a movie. A 1 Gbps network can theoretically move 125 MB/sec, or 7.5 GB/minute, so moving 256 GB of RAM would take about 34 minutes (worked out in the quick sketch after this list). If you want to do a rolling outage, multiply the number of hosts you have by 34 minutes to move all that RAM… just one more way VMware costs you less.
  • Look up how to do NIC teaming in Hyper-V… then look up how to do it in VMware.
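For what it's worth, here is the back-of-the-envelope math from the live migration bullet above as a quick PowerShell sketch (it assumes the theoretical 125 MB/s of a single 1 Gbps link, no compression and no extra NICs):

    # Rough host-evacuation time over a single 1 Gbps live-migration network
    $ramGB       = 256
    $mbPerSecond = 125                      # 1 Gbps ~= 125 MB/s theoretical
    $minutes     = ($ramGB * 1000) / $mbPerSecond / 60
    "{0} GB of RAM ~= {1:N0} minutes per host" -f $ramGB, $minutes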

The bottom line

If you want a solution that is straightforward and easy to use, then Hyper-V is probably not the way to go. While it may not be a big line item on a bill of materials, what it saves you there it is certain to cost you in time and effort (not to mention frustration).


123 Responses to "My frustration with Hyper-V, do you really save anything?"

  1. Apologies for being blunt but this is an absurd article. You are only hurting your credibility by leaving it up.

    Your arguments are based on your lack of knowledge, or on links to other people who have no idea how to work with Hyper-V.

    Your comparison is also based on Windows 2008 / Hyper-V v1 vs current VMware (as of the time of writing), and also Hyper-V vs vCenter. For a real comparison you would need to use vCenter vs Virtual Machine Manager… oh wait, then vCenter would fall well behind, rendering the article useless…

    The article you reference on TechNet just says that's what CSVs were designed for. It's to warn the engineers out there who have no clue against doing something stupid like trying to share a CSV as file server storage… As you worked out, a CSV sits on NTFS. This means it can have any file on there.

    Then you go on to talk about the reliability of Failover Cluster Manager and redirected I/O. This is only ever a problem if the cluster is not built with appropriate CSV network adapters. It's a non-issue in 2012, but it is clearly written up in the 2008 docs, and every time I was called in to help someone with a buggy 2008 Hyper-V cluster using CSVs it was because they deployed it without following MS guidance… Whose fault is that?

    Also the fun stories are ridiculous.
    1) An engineer who knows the first thing about virtualization knows that best practice is to have a physical DC. In the absence of that, you ensure that not all of your DCs are in your clusters. Virtualization 101.

    2) Expanding a CSV is easy. That doc is based on 2008.

    3) Team NICs, or virtual NICs attached to the vSwitch on the host… as many NICs as you want for Live Migration. I don't even use the GUI; all my networking is scripted and up in seconds.

    4) Right-click, "Add to new team". Done. The mistake MS made in 2008 is that they left teaming up to the vendors, thinking the vendors would do it best. Um, no. Fail. In 2012 MS took over and teaming is a cinch.

    Rant over… but seriously, take this down and maintain some dignity.

  2. Thanks for the comment David!

    This article is 2 years old… clearly they have improved in that amount of time… it still doesn't change the point of view I had 2 years ago.

    Have a great day!

  3. This is directed at David. No offense intended.

    Apologies for being blunt but that was an absurd comment. The article is comparing ease of setting up a Hyper-V cluster with the ease of setting up a VMware cluster. Having set up MANY of both, I happen to agree with Justin.

    1) Yes that is true FOR HYPER-V ENVIRONMENTS – as a Windows Domain is a pre-requisite for a Hyper-V cluster. Virtual environments are designed to be up and running indefinitely, and with DRS affinity and anti-affinity rules you can keep your DCs separate and on different hosts. If you design the infrastructure properly, there’s no need for physical DCs and that’s in fact an old-school way of thinking, virtualisation 101?

    2) Expanding a CSV is easy, but not as easy as expanding a datastore in vSphere – the foundation of Justin’s article is comparing the two from a “how easy is it to do” perspective.

    3) Yes, easy if you're using SCVMM, but you need to use PowerShell if you're not. I'm talking about converged networking. Since Hyper-V dominates the small business sector, PowerShell it is. These businesses will not fork out thousands on the licensing – if they would have, they would probably have gone with VMware.

    4) Easy if you're running the GUI-based version, but what about Core? And I've personally had many issues with Server 2012 R2 NIC teaming with Hyper-V, where the team suddenly stops passing management traffic – yet my VMs are fine.

    Sorry – you can't come in here and start dissing somebody in the industry whom I look up to just because you feel like having a rant. If you were a part of the VMware community you'd know that Justin is an accomplished and respected virtualisation consultant.

    By the way, I happen to do a lot of work based around Hyper-V as well. But I tend to work more with VMware hence me going through the paces to become a VCDX.

    Currently, VMware has about a 50% share in virtualisation, with Microsoft close behind at 30% and the rest making up the remaining 20%. These statistics can be looked up, and are accurate as of June 2015.

    A bit like comparing Apple to Samsung.

    I am very fond of Hyper-V indeed, but VMware and vSphere will be my first choice as long as the technology stays as good as it is. There are other players in the game who are emerging and trying to take VMware’s throne, Nutanix for one.

    …. but seriously, take down your comment and practice what you preach.

    Again, no offense intended.

  4. Graeme (or is it Justin in disguise.. only joking..),

    Justin's post was, as you say, about setting up a Hyper-V cluster vs a VMware cluster. The difference being that with VMware you need vSphere. This means you should compare it to SCVMM.

    Also, while we're on vSphere vs SCVMM: if you buy SCVMM you get everything – SCOM/SCORCH/SCDPM/SCSM/SCCM & App Controller. This means you get every feature that Hyper-V/SCVMM can do, all at once. No licensing upgrades to be able to do vMotion/replication/distributed switches/network virtualisation/automation/host profiles/content library/vGPU etc… For VMware you need a master's in accounting just to work out the licensing requirements.

    In response to your points on my points..

    1) MS recommend a physical DC, always. Hypervisor agnostic. Just because it's common to see DCs being virtualised doesn't mean it's right; the trade-offs are weighed up and people make a call. And to be clear, all DCs I deploy are VMs, it's just the risk we accept as the norm. MS built Failover Clustering, hence it relies on a DC. Be smart: don't put all your DCs in the one cluster, or in a cluster at all. In a typical medium-sized environment where I have 2 DCs, these just end up on separate Hyper-V hosts in an unclustered fashion. Problem solved.

    The “Failover Cluster needs a DC” argument just highlights bad engineering/architecture.

    2) An experienced Hyper-V engineer would know how simple it is in VMM. Oh, and that argument is blown away by SMB3 storage in Scale-Out File Servers..

    3) OK, a small business trying to deploy Hyper-V clusters… they may have a gap. But if they spend the money they get VMM. It's not as big of a tax to go VMM as it is to go VMware, so it's a moot point. Also, even if the price were exactly the same, you'd be bonkers to go VMware.

    4) Core? Simple: "New-NetLbfoTeam"… If you don't know how to use basic PowerShell, or the internet to find the commands, then you have no place deploying, managing or troubleshooting a cluster of any make or model.
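    Something along these lines (the team and adapter names are made up, obviously):

        # Create a switch-independent team from two physical adapters on Server Core
        New-NetLbfoTeam -Name "Team0" -TeamMembers "NIC1", "NIC2" -TeamingMode SwitchIndependent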

    OK, I may have been too aggressive in my comments, apologies for that. I am just tired of hearing and reading blind comments by VMware fan club members who refuse to face the truth.

    I will agree that VMware clusters are worth consideration for a small business, as SCVMM is designed for medium/large-scale deployments.

    The fact remains that yes, the article is two years old (which kinda makes me look silly for commenting), but even at the time it was based on comparing the full-featured vCenter vs an old version of Hyper-V.

    It’s only fair that a comparable article be written comparing VMM vs ESXi.

  5. David,

    Fair play to you!

    I totally agree with you that there are people on both sides of the fence advocating why their choice is better than others’. It’s annoying, as often the people in one camp are uneducated and / or inexperienced with their choice’s rival product.

    With regards to the point of not being as taxing to go with VMM compared to VMware, that really depends on the version. Enterprise Plus, maybe not, but does Hyper-V have a similar feature set compared to vSphere Ent Plus? Last time I checked (fairly recently) it was no. Things like Storage I/O, Storage DRS, Network I/O control. Here’s another comparison, which I’m sure Microsoft is working like crazy to match: https://www.vmware.com/uk/why-choose-vmware/robust/robust-foundation

    Can you increase the size of a VHDX without having to shut down the VM? Last time I tried it, no.

    It's a great time for people like us to be employed in virtualisation and cloud technology, because we can literally take our skillset and go sideways – the principles don't vary much! Sure, it's different, but different doesn't always mean worse. Luckily I get to work with both, but I'm really not too worried which one ends up on top. It's not like we're petrol engine mechanics in a world that's moving towards electric engines.

    Eventually when I do reach my VCDX status, I will be used to a certain way of thinking and those skills will be relevant as long as virtualisation is relevant.

    I’m guessing you Googled something similar to “Hyper-V vs VMware” to end up at this particular thread, and it’s something I often Google myself when I’m putting together a solution for a particular client, because I get to give the customer some real world proof to justify my design choice. The guy I work with who’s the Hyper-V specialist still works alongside me and me alongside him and we both learn new things every day!

    Happy virtualising.

  6. Graeme,

    You've been able to hot expand SCSI disks since 2008 R2. The issue was that Hyper-V VMs used IDE as the boot disk, meaning the VM had to come down to modify the VHDX.

    Since 2012 R2 (3 years now) Generation 2 VMs use SCSI as the boot disk, meaning you can hot expand all VHDXs on the fly, not just the data disks. It is a little annoying for me that MS limit the VM generations to big changes, whereas VMware have many VM hardware versions with various levels of features.
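    On 2012 R2 or later it's a couple of lines of PowerShell, something like this (the path and drive letter are made up):

        # On the host: grow the VHDX while the (SCSI-attached) VM keeps running
        Resize-VHD -Path 'C:\ClusterStorage\Volume1\SQL01\data.vhdx' -SizeBytes 500GB

        # In the guest: extend the partition to fill the new space
        $max = (Get-PartitionSupportedSize -DriveLetter E).SizeMax
        Resize-Partition -DriveLetter E -Size $max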

    Storage I/O, Storage DRS and Network I/O Control equivalents have all been in since 2012 R2. MS are just terrible at marketing their development. It's also due to these technologies and features having no standardised naming, so VMware call it one thing and Microsoft call it another.

    That comparison really is just a poor attempt at propaganda. I won't go into detail as that would be a whole other thread, but the majority of those points are either exaggerated, a blind perspective or just wrong. Also, some of the features in the VMware tick column are in Hyper-V, but they make no mention of them in the Hyper-V column, e.g. "Supports 3D graphics" – Hyper-V has had RemoteFX since 2012.

    What I do enjoy about that document you linked, though, is how small the differences between the hypervisors are; VMware really have just highlighted the fact.

    I haven't looked up any documents, just commenting from knowledge of Hyper-V. These days I'm starting to lack a little in the VMware space, but virtualisation is virtualisation. As you say, our skills are transferable no matter the brand name.

    I was wondering how long it would take for you to ask how I ended up on this thread. It was a problem with a CSV in a Hyper-V build I am looking at. In the middle of the build someone went and placed 20 TB of VMs on the cluster well before it was even close to ready, and they're experiencing weird file corruption. It's a configuration/usage issue, not a CSV reliability issue… but I digress…

    Good luck on your VCDX status. Oh, that's another thing VMware have done well: the training and certifications. So many certified engineers do not want their heavily invested certs to be voided, so it adds another level of attachment to their stance in the anti-Hyper-V debate.

  7. Unfortunately I’ve not been able to dive into large Hyper-V environments (as my colleague does) but it sounds like they’ve matched their feature set with VMware’s. Of course the link is just blatant marketing for the layperson or IT Director who’s not technical.

    The VCDX is certainly an “all-in” commitment, and afterwards I’m looking to do my CCNP however I need to find time to squeeze Microsoft Private Cloud in too.

    Their training and certifications are very good and test ability fairly, although the actual certification structure is all over the place and keeps changing. The VCP expires after two years and is versioned (4 / 5 / 6 etc.), yet other certifications don't expire as long as the holder has a valid VCP! They should take the version off the VCP, much like Cisco do with their certifications, and up the expiry to 3 years in my opinion; once you're certified in a certain version, that should be it.

    It’s funny you mention investment in a particular certification track and being more attached to their chosen vendor – that’s really true! I will say that I know a few guys with their MCSE Private Cloud certs, and they achieved them relatively easily compared to the amount of work required for VCAP / VCDX. Microsoft is so much larger as a company that they need to streamline all of their tracks, whereas VMware specialises in one thing so the community is much closer as a group. There are a lot of things I want VMware to do differently, prices need to be reviewed for one, and charity / education deals need to match Microsoft’s because it’s a no-brainer for them on which platform to virtualise on with their dirt cheap licences – 2012 R2 Datacenter being £186 for a dual socket CPU, SCVMM being a couple hundred as well.

    April 2016 and I'm having the same frustration. The decision has been made to move from our Hyper-V setup to VMware. I'm just wondering how I could move a SQL cluster that is set up on one of the Hyper-V hosts over to VMware without going back to physical or virtual RDM LUNs.

    Appreciate your feedback.
    Regards,

  9. Hi Justin,
    I thought of the SQL AlwaysOn option, but in my case I'm running SQL Standard.

    With regards to the MSCS cluster, in the current setup on Hyper-V the LUN is assigned to the host and individual VHD disks are created for clustering purposes. I don't think RDM would be an option between VHD and VMDK; the risk involved means more time troubleshooting issues 🙂

    I thought of building a new SQL cluster in the VMware environment and applying all the best practices for SQL virtualization: stop the applications (Dell Wyse vWorkspace VDI), take a backup of all the DBs, import them into the new SQL cluster, remap the ODBC connections/applications, and hope it works.

    It's a clean solution that won't bring any dirty stuff over from Hyper-V, and it means less downtime.

    Seems like a good path?

    Regards,

  10. Build a new cluster in VMware. You could try and name it the same thing but you’ll need to be prepared to do some extra fancy steps to do that.

    I would suggest you upgrade the SQL version at this time. SQL 2014 lets you have more RAM in Standard.
    For me, RDMs are a fact of life for MSCS in VMware. Hyper-V is so much easier for virtualized clusters.

    Our environment contains several very large virtualized SQL clusters and file server clusters. I wish VMware would create virtualized HBA adapters allowing me to map storage to a vHBA over FC/iSCSI. I prefer FC. RDMs suck and are nothing but a headache.

    We are now looking at using Hyper-V for all our lower environments to save costs. We are looking at the Azure Pack and Azure services, so it's possible VMware is losing its importance long term? Not sure I agree with that, but I question why you would leave Hyper-V now, with the next version about to drop and with how much the data center is changing from Microsoft's point of view.

  11. Hi Sean,
    Moving to Hyper-V would just make you regret giving up the reliability, flexibility, simplicity and security provided by VMware.

    To me, having learned virtualization concepts on the VMware product, I know it inside and out, so during troubleshooting I know what to do instead of Googling. Why should I have to waste my memory learning the same virtualization concepts all over again, with a fresh set of issues on top…?

    Hussain: earlier you wrote: "I'm just wondering how I could move a SQL cluster that is set up on one of the Hyper-V hosts over to VMware without going back to physical or virtual RDM LUNs."

    That was the reason I responded to your post. I don't exactly follow you on your comment "Moving to Hyper-V would just make you regret giving up the reliability, flexibility, simplicity and security provided by VMware." I am guessing you don't like Hyper-V; to each his own is my response to that.

    This blog has gotten a bit long in the tooth… with 2016 almost at RTM, folks are going to have to take another look at Hyper-V again. I am seeing a resurgence in open systems due to tighter budgets and leaner requirements. Because Microsoft licensing agreements make developer environments basically free, I've found leveraging Microsoft tech advantageous. Trust me, this isn't the "but I think VMware is better" argument! Instead it is "wow, this is good enough and it's free." I moved on from Linux back when we retired our Sun E450 systems and I've never looked back. If you think going back to a Linux kernel is a good thing then good for you. I also don't wear a watch anymore and don't plan on going back to that either. What I hope is that a new version of this blog gets written by the author.

    All I am going to point out is that MSCS is a pain in the ass within VMware. We are now on ESXi 6.x and nothing's better… we are using the SSD read cache option and that was not very impressive. RDMs suck, they are the bane of my existence and I hate them. MSCS is so much easier in Hyper-V. If you're building a number of failover cluster nodes, or anything outside of AlwaysOn clusters, you're going to need RDMs and everything is shit. Honestly I think most folks would just tell you not to do MSCS within VMware.

    VMware could fix all of this by creating virtual HBAs and connecting them to FC/iSCSI physical HBA ports within the hypervisor hardware. Then RDMs would be a thing of the past and your nodes would have the ability to move around the cluster (DRS) as needed. Once you need an RDM, you are essentially pinning that node to a physical host in your VMware environment. That makes maintenance a pain in the ass. Trust me… MSCS sucks in VMware.

  13. seanfromchicago – I’ve never had any problems whatsoever with MSCS on VMware.

    Unless I’m missing something MSCS sits on top of the infrastructure regardless of what it is and clusters at the OS level.

    I have a friend who deployed 64 Hyper-V hosts across 8 Cisco UCS chassis and he told me that it was (and still is) incredibly difficult to get working… it just needs ridiculous amounts of massaging. And he also says (being the Hyper-V expert that he is) that VMM 2016 still sucks compared to vCenter.

    Of course if we sit and argue about all this we’ll get left behind because it’s all changing again anyway with HCI, VSAN and Nutanix trying to enter the hypervisor space.

    If you were a customer of mine I wouldn’t have advised moving to 6 yet, although U1 has improved a lot.

    PS I still wear a watch – have you moved back to pocket watches? Sweet!

  14. I agree with David in a lot of points.

    This article is based on Justin's experience (or lack of experience) with Hyper-V, compared to "high skills" with vSphere.

    Let me write the same article and Hyper-V wins 🙂

    In the end it is not a question of one hypervisor being superior – it is a question of the administrator's ability to maintain the chosen hypervisor.

    “If you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid.”

  15. Graeme – Well, the 8-node Hyper-V cluster across HP DL580s (four 20-core CPUs and 2 TB of RAM each) worked like a charm. It was pretty easy to set up and manage. I suspect the UCS is more of the pain point than Hyper-V. I'm not a fan of the UCS platform.

    I'm not going to argue. I am going to say that RDMs are crappy and I hate them. And I don't think people are going to say they love RDMs. Having to use RDMs for MSCS on VMware is a pain. I vote for getting rid of RDMs and all the limitations they bring to the table, and I think replacing them with vHBAs would make sense. That is going to require some trickery, or just ignoring the storage that is presented via the vHBA. I assume I don't have to list all the bad things about RDMs here.

    VSAN? That is a waste of money. The license costs for VSAN are way too high to be cost effective. I will take a SAN over trying to do it with VSAN. I have gone down that road twice and VSAN was just a waste in my opinion. Nutanix is just one player in the space, and while they are cool there are plenty of other cool options. I personally like Nutanix as a VDI platform. The Dell offering of Nutanix is pretty reasonable cost-wise.

    We moved to 6 and so far so good. We have been pretty conservative in the migration. We moved a number of our environments before prod. We don’t have any complaints at this time.

    My phone is my watch and has been since '95. I own a pocket watch and I own watches, but they are for special occasions and are worn as jewelry, not for function.

    Hey seanfromchicago. RDMs are old school; I have moved many customers away from them, and as it stands MSCS is the only real reason why you'd want to / need to use them. I agree that they should go, but just saying I've never had any issues with them other than trying to migrate away from them. Physical RDMs impose huge limitations (like you say, locking a VM to a host) whereas virtual RDMs offer more flexibility – but both are yesterday's news IMHO.

    VSAN is VMware’s answer to HCI. The product provides very good performance with commodity hardware, but I agree the licences are too expensive. I think for greenfield setups there’s a strong case for it, but where I proposed it to a customer recently they opted for the more traditional and more mature SAN option.

    I wear a sports diving watch for my every day watch, and since it’s a Tag I can wear it for special occasions too!

  17. Hi Sean,
    I somewhat agree with you, and you are right when you said "I hate Hyper-V", due to the complexity of CSV and networking redundancy.

    With regards to the RDMs, I've used physical and virtual RDMs with only one issue, where the LUN mapping was not consistent across all hosts; that was due to the EMC AX5i iSCSI SAN, and it made migrating the VM from one host to another impossible.

    Back to the original discussion regarding the migration of Hyper-V VMs (in particular the SQL MSCS). The way this is set up in Hyper-V, the LUN is assigned to the host, the guest OS/VMs are created on those LUNs, and the MSCS quorum and SQL data are all on .vhd disks. There is no way to introduce a physical node, a Hyper-V VM node hosted on a different host, or a VMware VM that can access the same data/LUNs.

    Option-1. Take the risk and play around with V2V conversion.

    Option-2. Build a fresh MSCS cluster or a different SQL high-availability solution. Back up SQL / stop the applications / export and import the DBs, and remap the applications to the new instance.

    Option-3. Waste time learning Hyper-V and live with it.

    P.S.: I stopped wearing watches a long time ago.

    Regards,
