A Server error occurred. [500] SSO error:null

 

Previously, I wrote a post about re-pointing the PSC. A while later I logged in to the same environment and was greeted with this error message on my screen. I was quite frustrated, as I had made a few changes to this environment and assumed something had gone wrong while I was making them. See below the error message I received from my vCenter Server Appliance.

So I started troubleshooting the issue. The message simply says to check the vSphere Web Client logs for more details, which is the easiest place to start, so I opened an SSH session to my vCenter Server Appliance, enabled the shell, and used the command below to check the vSphere Web Client log.
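The exact command is in the full post; for reference, on a VCSA 6.x appliance the vSphere Web Client log can typically be followed like this (log path assumed, not taken from the post):

tail -f /var/log/vmware/vsphere-client/logs/vsphere_client_virgo.log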

Read More

Nested Virtualization: VCSA 6.5 deployment on Oracle Ravello Cloud

 

I was building a lab on Oracle Ravello Cloud and wanted to install the VMware vCenter Server Appliance 6.5 on a deployed ESXi host. I started the deployment as usual, and it failed in the middle of the VCSA configuration. The appliance could not power on, and the error message below appeared in the ESXi host client: “Failed to power on virtual machine <VM_NAME>. You are running VMware ESXi through an incompatible hypervisor. You cannot power on virtual machine until this hypervisor disabled“. See below error message.

So I tried to power on the virtual machine manually, but that failed too, and I ended up with the same error message again and again. Read More
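The full post covers the actual resolution. As general context only, the workaround most commonly cited for this “incompatible hypervisor” error on nested ESXi is to explicitly allow nested VMs by adding the following line to the affected VM's .vmx file (treat this as an assumption, not the fix from the post):

vmx.allowNested = "TRUE"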

ESXi PSOD due to a PCPU becoming too busy

One of the ESXi hosts failed with a “Purple Screen of Death”, and the analysis below shows what we found as the root cause of the failure.

The host was running vSphere 5.5 at a patch level lower than build 30xxxxx. We were not able to identify any hardware failures or any errors related to the server hardware, and I can confirm the host was configured with the correct drivers.

This is part of the error log we found on the failed ESXi host:

2017-09-12T05:25:54.232Z cpu37:66166510)MCE: 1118: cpu37: MCA error detected via CMCI (Gbl status=0x0): Restart IP: invalid, Error IP: invalid, MCE in progress: no.
2017-09-12T05:25:54.433Z cpu37:66166510)MCE: 1118: cpu37: MCA error detected via CMCI (Gbl status=0x0): Restart IP: invalid, Error IP: invalid, MCE in progress: no.
2017-09-12T05:25:54.633Z cpu37:66166510)MCE: 1118: cpu37: MCA error detected via CMCI (Gbl status=0x0): Restart IP: invalid, Error IP: invalid, MCE in progress: no.
2017-09-12T05:25:54.832Z cpu37:66166510)MCE: 1118: cpu37: MCA error detected via CMCI (Gbl status=0x0): Restart IP: invalid, Error IP: invalid, MCE in progress: no.
2017-09-12T05:25:55.034Z cpu37:66166510)MCE: 1118: cpu37: MCA error detected via CMCI (Gbl status=0x0): Restart IP: invalid, Error IP: invalid, MCE in progress: no.
2017-09-12T05:25:55.235Z cpu37:66166510)MCE: 1118: cpu37: MCA error detected via CMCI (Gbl status=0x0): Restart IP: invalid, Error IP: invalid, MCE in progress: no.
2017-09-12T05:25:55.434Z cpu37:66166510)MCE: 1118: cpu37: MCA error detected via CMCI (Gbl status=0x0): Restart IP: invalid, Error IP: invalid, MCE in progress: no.
2017-09-12T05:25:55.634Z cpu37:66166510)MCE: 1118: cpu37: MCA error detected via CMCI (Gbl status=0x0): Restart IP: invalid, Error IP: invalid, MCE in progress: no.
2017-09-12T05:25:55.833Z cpu37:66166510)MCE: 1118: cpu37: MCA error detected via CMCI (Gbl status=0x0): Restart IP: invalid, Error IP: invalid, MCE in progress: no.
2017-09-12T05:25:56.032Z cpu37:66166510)MCE: 1118: cpu37: MCA error detected via CMCI (Gbl status=0x0): Restart IP: invalid, Error IP: invalid, MCE in progress: no.
2017-09-12T05:25:56.231Z cpu37:66166510)MCE: 1118: cpu37: MCA error detected via CMCI (Gbl status=0x0): Restart IP: invalid, Error IP: invalid, MCE in progress: no.
2017-09-12T05:25:56.429Z cpu37:66166510)MCE: 1118: cpu37: MCA error detected via CMCI (Gbl status=0x0): Restart IP: invalid, Error IP: invalid, MCE in progress: no.
2017-09-12T05:25:56.628Z cpu37:66166510)MCE: 1118: cpu37: MCA error detected via CMCI (Gbl status=0x0): Restart IP: invalid, Error IP: invalid, MCE in progress: no.
2017-09-12T05:25:56.629Z cpu37:66166510)MCE: 222: cpu37: bank7: status=0xcc000f4000010091: (VAL=1, OVFLW=1, UC=0, EN=0, PCC=0, S=0, AR=0), ECC=no, Addr:0x526e3600 (valid), Misc:0x390261e840 (valid)

 

This was identified as the root cause: the PCPU becomes too busy logging all the correctable error messages to perform its routine background tasks, leading ESXi to assume that the PCPU is unresponsive.

Possible fix for the error: to resolve this PSOD we had to update the ESXi 5.5 build to 3568722; at the time, the latest build available for 5.5 was 5230635.
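As a rough sketch of how such an update can be applied from the ESXi shell (the bundle path and profile name below are placeholders, and the host should be in maintenance mode first):

vmware -vl                                                                # confirm the current ESXi version and build
esxcli software sources profile list -d /vmfs/volumes/datastore1/<patch-bundle>.zip      # list the image profiles in the bundle
esxcli software profile update -d /vmfs/volumes/datastore1/<patch-bundle>.zip -p <image-profile-name>
reboot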

You can read more about this in the KB articles below:

 

Installation of component VCSServiceManager failed with error code ‘1603’. Check the logs for more details.

 

I was deploying vCenter 6.x with a separate Platform Services Controller and an external database, and the installation failed with the error messages below. In the middle of the installation it reported “An error occurred while starting the service ‘invsvc'” and rolled back the installation at the end.
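As an aside, before the rollback cleans everything up it is worth grabbing the installer logs. The locations below are what I would check on a vCenter 6.x Windows installation (assumed, not taken from the post):

  • dir /s /b %TEMP%\*.log
  • dir /s /b "%PROGRAMDATA%\VMware\vCenterServer\logs"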

 
At the end it gave me the error message below.

Read More

An internal error has occurred – Error #1009 – VIO Deployment Error

This is not quite a full post, but I thought I would write it to show the importance of checking the VMware Product Interoperability Matrices before you start any deployment. A friend of mine was deploying VMware Integrated OpenStack in his environment and chose vSphere 6.5 for the vCenter Server, the latest version at the time. There were no issues with the vCenter Server deployment; it completed successfully, and he started the VIO 3.x deployment.

He hit this issue at the step of selecting the management cluster in the deployment and was trying to find a solution.
He asked how this type of error could occur, so I checked the VMware Product Interoperability Matrices for vCenter Server 6.5 against VMware Integrated OpenStack and found this.
I also searched for a VMware KB article related to this error, found one, and shared the details. The article says: “At this time the VMware vCenter Server 6.5 for use with a VIO 3.x Deployments is not Supported. Please also note that VMware vSphere Server 6.5 is not supported for use with any of the VMware Openstack Versions.” At the time of writing, VMware Integrated OpenStack 3.1.0 had not been released; it is the release that adds support for vCenter Server 6.5.
KB: 2148068 (updated: Dec 14, 2016)
VIO 3.1.0 released on:
So this is a good example of the impact of proceeding without checking all your dependencies, such as other VMware solutions, databases, and upgrade paths, before starting a deployment.
The Interoperability Matrix is a really cool and easy tool to use; always check it before you proceed.

“The Client Integration Plugin 6.0 was not detected.” – VCSA 6.0 Web client error

I was deploying the VCSA 6.0 appliance (VMware-VCSA-all-6.0.0-5326177, with VMware-ClientIntegrationPlugin-6.0.0-4911605) in my VMware environment and was not able to open the “vcsa-setup.html” file to deploy the appliance. It threw the error message below even though the Client Integration Plugin 6.0 was installed on the system.

I searched the internet and found lots of workarounds, but unfortunately nothing worked for me. This has been discussed in the VMTN community, but no solution had been found at the time of writing this post.
One of the workarounds suggested on the internet was to enable “NPAPI” in the Chrome browser, but that option has been removed in recent versions of the browser.
I tried upgrading my Chrome browser and it didn’t work; I already had the latest version installed and updated.
I also had the latest version of the Mozilla Firefox browser, and it didn’t work either.
After installing Firefox, it threw the error message below every time I opened the “vcsa-setup.html” file.
I opened the “vcsa-setup.html” file in Internet Explorer and it worked. At the moment the only solution I have is to use the IE11 browser, even though I don’t like it…!!! Anyway, thank you Microsoft…!!!

VMware VSAN: Error occurred while deleting a folder

Once I had completed my vsanDatastore creation I wanted to test the datastore, so I created a test folder called “New Folder” inside the vsanDatastore and tried to delete it soon after. Unfortunately it gave me the error message below, “Cannot delete the file [vsanDatastore] New Folder”, and the folder could not be deleted.

A quick Google search turned up a blog post by Duncan Epping on Yellow-Bricks.com, and I followed his steps.

I changed directory to “/vmfs/volumes/vsanDatastore” and listed the contents of the vsanDatastore.
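On the ESXi shell that looks like this (the datastore name here is the default vsanDatastore):

cd /vmfs/volumes/vsanDatastore
ls -la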

As Duncan mentioned, I ran the “/usr/lib/vmware/osfs/bin/osfs-rmdir <name-of-the-folder>” command, and it gave me the error message below.

As the next step, instead of the friendly folder name I passed the folder’s underlying identifier (the UUID-style name shown in the directory listing) to the command, and the folder was successfully deleted.
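For illustration only, the working command looked roughly like this (the UUID below is a placeholder, not my real folder identifier):

/usr/lib/vmware/osfs/bin/osfs-rmdir 52e1d1a3-xxxx-xxxx-xxxx-xxxxxxxxxxxx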

I just wanted to share my experience with VSAN, and I would like to thank Yellow-Bricks.com and Duncan Epping for saving my life, as always.

VMware VSAN: Setting Up the VMware VSAN

As you may be aware, VMware VSAN is a new software-defined storage tier for VMware vSphere that brings the benefits of the Software-Defined Datacenter (SDDC) to storage. Using the SSDs and HDDs of clustered ESXi hosts, it creates a flash-optimized, highly resilient shared datastore for those hosts.
I’m not going to discuss the benefits and new features of VSAN here; I will cover those in a separate post on my blog. First, I would like to give you an overall view of VSAN in a graphical way, so you can easily understand the picture behind the scenes. Image courtesy of the VMware Developer Center.

VSAN is configured natively within vSphere, and you can set it up in a few mouse clicks. I’m going to show the exact steps I followed to build my Virtual SAN, which I created with three ESXi nodes.

Before I move on to the implementation, I would like to mention the requirements for a VSAN deployment. These are the VMware-recommended requirements (a few quick host-side checks follow the list):

  • At the time of writing, VMware vSphere 6.0 U2 is recommended, which adds VSAN erasure coding (RAID-5/6); my configuration is based on vSphere 5.5
  • A minimum of three ESXi hosts
  • 6 GB of memory at a minimum for the configuration, with 32 GB as the recommended minimum
  • In production, if an ESXi host exceeds 512 GB of memory, a dedicated magnetic disk is required for the ESXi hypervisor installation; SD and USB devices are not supported
  • A certified disk controller; pass-through/JBOD needs to be supported, otherwise each single disk needs to be configured as RAID-0
  • At least one certified flash disk for caching
  • A dedicated 1 GbE NIC or a shared 10 GbE NIC; VSAN traffic can be shared with Management, vMotion, and Fault Tolerance traffic if it is 10 GbE
  • Multicast must be enabled on the VSAN network (L2 or L3)
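A few quick checks I run from the ESXi shell before starting, none of which are VSAN-specific (output formats vary slightly between ESXi versions):

vmware -vl                           # confirm the ESXi version and build
esxcli hardware memory get           # confirm physical memory against the 6 GB / 32 GB guidance
esxcli storage core device list      # check the "Is SSD" flag and size of each local device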

Setting up the virtual network for the VSAN

First of all, I’m going to set up my virtual networks for the VSAN traffic, using standard switches with two vmnics. Creating a standard switch here is no different from usual, but you do need to get familiar with the vSphere Web Client, because the VSAN configuration can only be done through the Web Client. For the sake of a complete article, I would like to provide all the steps to get started with VMware VSAN.
Select one of your ESXi hosts and navigate to the Manage > Networking section. Under Virtual switches, click “Add host networking” to create your virtual switch for VSAN.

Select “VMkernel Network Adapter” as the connection type and click “Next”.

I’m going to create a new standard switch for this VSAN traffic, so select “New standard switch” and click “Next”.

Add two “vmnics” in the next step by clicking the green plus sign under “Assigned Adapters” and click “Next” to continue.

Add a network label and enable Virtual SAN traffic in the next step. Click “Next” to continue.

Add an IP address for your VMkernel adapter and click “Next” to continue.

Review the configuration and click “Finish” to complete the VSAN standard switch configuration.

My standard switch configuration looks like this:
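For anyone who prefers the command line, roughly the same configuration can be built with esxcli from the ESXi shell. This is only a sketch; the switch name, vmnics, VMkernel interface, and IP address below are placeholders for my lab values:

esxcli network vswitch standard add -v vSwitch1
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic2
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic3
esxcli network vswitch standard portgroup add -v vSwitch1 -p VSAN
esxcli network ip interface add -i vmk1 -p VSAN
esxcli network ip interface ipv4 set -i vmk1 -I 192.168.10.11 -N 255.255.255.0 -t static
esxcli vsan network ipv4 add -i vmk1      # tags vmk1 for Virtual SAN traffic (vSphere 5.5 syntax)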

Note: You can use a Distributed Switch for this configuration and allow the VSAN traffic to flow through it. As I mentioned before, there is no difference in the network configuration other than enabling VSAN traffic on your VMkernel adapter. One nice thing: while a vSphere Distributed Switch normally requires an Enterprise Plus license, the VSAN license entitles you to create a VDS for your VSAN traffic.

Create the VSAN Cluster

Now I’m going to create a VMware cluster and turn on the VSAN feature.
Right-click on your Datacenter and select “New Cluster…”
Provide a cluster name as usual, check the VSAN option, and select the disk-claiming mode used to claim your disks for the VSAN.
There are two options to claim disks for the VSAN : 
  • Automatic – It will claim all the empty disks in your ESXi hosts for your VSAN configuration
  • Manual – You have to manually add any new disk for your VSAN 

 I’m using manual mode for my VSAN configuration

Note: You need a separate VMware license for this, and a 60-day evaluation license is selected by default. The VSAN feature is not covered by your vCenter license.

Now drag and drop your ESXi hosts into the newly created cluster.
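As an optional sanity check (not something the Web Client requires), each host’s cluster membership can be confirmed from the ESXi shell:

esxcli vsan cluster get      # shows the Virtual SAN cluster UUID and this host's membership state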

You can see the resource and disk utilization, as well as the disk management options, under the cluster’s management section.

 

Creating Disk Groups for the VSAN

We have successfully created the VMware VSAN cluster, and now it is time to create the disk groups. A VSAN disk group contains either a combination of magnetic disks and flash drives (a hybrid VSAN configuration) or flash drives only (an all-flash configuration).

  • Hybrid VSAN configuration – one flash device and one or more magnetic devices are configured as a disk group; normally one disk group can have up to seven magnetic disks. The flash device serves as a read-and-write cache for the Virtual SAN datastore, while the magnetic disks provide the capacity. By default, VSAN uses 70% of its flash capacity as read cache and 30% as write cache. Note that newer features available in 6.2, such as deduplication and compression, are not available in a hybrid configuration.
  • All-flash configuration – flash devices in the cache tier are used only for write caching; there is no read cache, as the read performance of the capacity devices is more than sufficient for Virtual SAN.

Go to Disk Management and click “Claim disks” to claim your disks for the VSAN.

Select the disks you are going to add to your VSAN datastore; you need to select at least one SSD for the configuration.

It will validate the disks as you add them and will throw an error if you haven’t selected the correct combination of disks.

Once the disks have been added correctly, it will create the “vsanDatastore” for you, and you can see it under your datastores.
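The same disk-group creation can also be done per host from the ESXi shell, which is handy in manual claiming mode. A minimal sketch, assuming one SSD and one magnetic disk whose device names below are placeholders:

esxcli storage core device list                            # identify the SSD and HDD device names (naa.* / mpx.*)
esxcli vsan storage add -s <ssd-device-name> -d <hdd-device-name>
esxcli vsan storage list                                   # verify the disk group was created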

 

Possible Errors when you are adding disks and creating the vsanDatastore

I encountered the error below when I added more than one magnetic disk to my VSAN; a host with 4 GB of RAM is allowed only one magnetic disk. The error is pretty straightforward and easy to understand. Once I selected one SSD and one magnetic disk, I was able to create the vsanDatastore without an issue.

There was also an error saying “Unable to create LSOM file system for VSAN disk…”, which I found can be caused by a time difference between the hosts and the vCenter Server. I corrected the time, but that did not fix the issue. In the end I had to increase the RAM of the hosts on which the error occurred, and that fixed it.
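For the clock-skew part, a quick way to compare and correct the time on each host from the ESXi shell (assuming NTP is already configured on the host):

date                          # compare against the vCenter Server time
/etc/init.d/ntpd restart      # restart the NTP service so the host resyncs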

This became a lengthier post than planned; I hope you enjoyed it and found some value in it. Thanks for reading.

MDT 2013 reference Image Deployment error

I was working on an MDT 2013 deployment and was deploying a reference image to test the MDT system when I got the error message below on the screen.

“Windows failed to start. A recent hardware or software change might be the cause. To fix the problem:

  1. Insert your windows installation disc and restart your computer.
  2. Choose your language settings, and then click “Next.”
  3. Click “Repair your computer.”
If you do not have this disc, contact your system administrator or computer manufacturer for assistance. 
        File:  \Windows\System32\boot\winload.exe
        Status:  0xc000000f
        Info: The application or operating system couldn’t be loaded because a required file is missing or contains errors. “
I mounted the captured image using the DISM command:
  • dism /mount-wim /wimfile:"E:\ISO\Images\Capture Image1\CapIMG.wim" /index:1 /mountdir:"C:\CapIMG\Mount"
Then I tried to unmount the image:
  • dism /unmount-wim /mountdir:"C:\CapIMG\Mount" /commit
The unmount was not successful, so I discarded the mount point:
  • dism /unmount-wim /mountdir:"C:\CapIMG\Mount" /discard
This time the image unmounted successfully, and I mounted it and tried again.
After that, it worked fine with MDT.
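If the discard had also failed, two DISM options I would normally try next (not part of the original troubleshooting) are:
  • dism /Get-MountedWimInfo          (lists current WIM mount points and their state)
  • dism /Cleanup-Wim                 (cleans up resources associated with corrupted or stale mounts)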