How CPU Limit and CPU Reservation Can Slow Down Your VM (If You Don't Do Proper Sizing and Analysis)

In this post, I would like to share about the CPU Limit and CPU Reservation configuration in VMware vSphere ESXi virtualisation technology.
Actually, those features are great (the configuration is also available in vCloud Director, which simply calls the corresponding configuration in vCenter), but only if you really understand how to use them properly. For example, if you would like to use a CPU Reservation, please make sure that you are not running those VMs in a fully contended/overcommitted environment. For a CPU Limit: if you have an application that always consumes 100% of the CPU even though you keep giving more CPU to the VM, then you can use the Limit configuration to cap the CPU usage of that application (but, for me, the best way is to ask your developer to fix the application!).
Okay, let’s talk more about CPU Limit.
Duncan Epping and Frank Denneman (both highly respected VMware bloggers) once said: “Look at a vCPU limit as a restriction within a specific time frame. When a time frame consists of 2000 units and a limit has been applied of 300 units it will take a full pass, so 300 “active” + 1700 units of waiting before it is scheduled again.”
So, applying a limit on a vCPU will slow your VM down no matter what, even if there are no other VMs running on that 4-socket quad-core host.
Next, let’s talk more about CPU Reservation.
Josh Odgers (another virtualisation blogger) also explained that a CPU reservation “reserves” CPU resources measured in MHz, but this has nothing to do with the CPU scheduler. So setting a reservation will help improve performance for the VM you set it on, but will not “solve” CPU Ready issues caused by “oversized” VMs, or by too high an overcommitment ratio of CPU resources.
The configuration of Limit and Reservation is done outside the Guest OS, so your operating system (Windows/Linux/etc.) or your application (Java/.NET/C/etc.) does not know about it. Your application will request resources based on the CPU allocated to that VM.
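To illustrate this, here is a minimal sketch (assuming a Linux guest) showing that the guest still sees its full vCPU allocation and nominal clock speed even when a Limit or Reservation is applied at the vSphere layer:

# Run these inside the Linux guest OS; the output does not change when a
# Limit or Reservation is applied outside the guest.
$ nproc                  # number of vCPUs presented to the guest
$ lscpu | grep MHz       # nominal clock speed, not the limited value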
You should minimise the use of Limits and Reservations, as they make operations more complex.
Conclusion:
It is better to rely on the default VMkernel scheduler, which already has great scheduling functionality that takes fairness into account. If you want to prioritise one VM over the others, you can use the CPU Shares configuration instead.
But, the most important thing is: “Please Bro…, Right Size Your VM!”
 
Kind Regards,
Doddi Priyambodo
 

Docker Version Manager (DVM) to Easily Move between Docker Client Versions

Another break-time post in between the ongoing tutorial series about cloud native applications 🙂
Sometimes when we are working in a container environment, we find that the server’s Docker version is not the same as the client’s version, so we cannot connect to the server. To solve this easily, we can install dvm (Docker Version Manager) so we can quickly switch between client versions on our machine.
These are the steps:

$ curl -sL https://download.getcarina.com/dvm/latest/install.sh | sh
$ source /Users/doddipriyambodo/.dvm/dvm.sh
# Usage of the commands:
$ dvm ls          # list the Docker client versions installed locally
$ dvm ls-remote   # list the versions available to install
$ dvm install 1.12.3   # install the specified client version
$ dvm use 1.12.3       # use the specified client version
$ dvm deactivate       # stop using the dvm-managed client in the current shell
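For example, a typical session to line the client up with an older server might look like this (the version numbers here are just illustrative; use whatever your server reports):

$ docker version --format '{{.Server.Version}}'   # e.g. 1.11.2 on the remote host
$ docker version --format '{{.Client.Version}}'   # e.g. 1.12.3 locally
$ dvm install 1.11.2                               # fetch a matching client
$ dvm use 1.11.2                                   # switch this shell to it
$ docker version                                   # client and server should now match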

 
Kind Regards,
Doddi Priyambodo

Running your Docker in Production Environment using VMware vSphere Integrated Containers – (Part 2)

Following on from the previous post, we will now continue with the installation and configuration of those components.
So, to rephrase the previous blog post: by utilising vSphere Integrated Containers, developers can now use their docker commands to manage their development environments, enriched with a dedicated container management portal (VMware Admiral) and an enterprise-feature container registry (VMware Harbor). System administrators can still use their favourite management tools to manage the infrastructure, such as vCenter, plus vRealize Operations and Log Insight to manage the virtual infrastructure in a holistic view, as shown in the diagram below:

A traditional container environment uses a host/server to run several containers. Docker has the ability to import images into the host, but the resources are tied to that host. The challenge is that sometimes that host has a very limited set of resources. To expand the resources on that host, we need to shut down the host (and therefore the containers) and add resources to that physical/virtual machine before more containers can be deployed. Another challenge is that a container is not portable: it cannot simply be moved to another host, because it is tightly coupled to the OS kernel of the container host.
Beyond resources, I already explained in my earlier post the other concerns around enterprise features when running Docker in a production environment, such as security, manageability, availability, diagnostics and monitoring, high availability, disaster recovery, etc. VIC (vSphere Integrated Containers) can address all of those concerns by using a resource pool as the container host and virtual machines as the containers. In addition, with the new Instant Clone feature of vSphere 6, VIC can deliver an “instant on” container experience alongside the security, portability, and isolation of a virtual machine. Adding extra hosts to the resource pool to dynamically increase infrastructure resources, live migration/vMotion, automatic placement/Distributed Resource Scheduler, dedicated placement/affinity, self-healing/High Availability, QoS/weight, quota/limit, guarantee/reservation, etc. add a lot of benefits to the Docker environment.
So, these are the steps to prepare the environment for vSphere Integrated Containers (VIC):

  1. Installation and configuration of vSphere Integrated Containers
  2. Installation and configuration of Harbor
  3. Installation and configuration of Admiral

So, let’s start the tutorial now.

Checking the Virtual Infrastructure Environments

  • I am running my virtualisation infrastructure on my Mac laptop using VMware Fusion Professional 8.5.1.
  • Currently I am using vSphere ESXi Enterprise Plus version 6 update 2, and vCenter Standard version 6 update 2.
  • I have NFS storage as my centralised storage; NTP, DNS and DHCP are also configured in another VM.

    screen-shot-2016-11-03-at-15-32-42
    screen-shot-2016-11-04-at-15-11-52

Installation of vSphere Integrated Containers (VIC)

There are two approaches to install VIC. This is the first one (I used this to install on my laptop):

  1. Download the installation source from github = https://github.com/vmware/vic
  2. You will download VIC with a git clone command, so first install the git components from here = https://git-scm.com/downloads
  3. Run this command = $ git clone https://github.com/vmware/vic
     screen-shot-2016-11-03-at-18-17-01
  4. After the download completes, go to the directory = $ cd vic
  5. Now, build the binaries using this command =
    docker run -v $(pwd):/go/src/github.com/vmware/vic -w /go/src/github.com/vmware/vic golang make all
     screen-shot-2016-11-03-at-18-42-34

OR, you can use the second approach (I used this to install on my VM):

  1. Download binary file from here = https://bintray.com/vmware/vic-repo/build
  2. In this personal lab, I am using this binary = https://bintray.com/vmware/vic-repo/build/6511#files
  3. Download that binary to the virtual machine that you will use as the VIC management host.
  4. Extract the file using = $ tar -zxvf vic_6511.tar.gz. NOTE: You will see the latest build listed there; the build number “6511” will be different, as this is an active project and new builds are uploaded constantly.

Okay, the installer is now in place. From the steps above, there are three primary components generated by a full build, found in the ./bin directory by default. The make targets used are the following:

  1. vic-machine – make vic-machine
  2. appliance.iso – make appliance
  3. bootstrap.iso – make bootstrap
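
If you only need one of those artifacts, you can presumably build a single target by replacing all with the target name in the same golang build container used earlier, for example:
docker run -v $(pwd):/go/src/github.com/vmware/vic -w /go/src/github.com/vmware/vic golang make vic-machine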

Okay, after this we will deploy our Virtual Container Host into the VMware environment (I am using vCenter with ESXi, as explained earlier). The installation can also run against a dedicated ESXi host (without vCenter) if needed.


Now, continue to create the Virtual Container Host in vCenter. Since I am using a Mac, I will run the commands from the Terminal.
$ ./vic-machine-darwin create --target 172.16.159.150/dc1.lab.bicarait.com --compute-resource cls01.dc01.lab.bicarait.com --user administrator@vsphere.local --password VMware1! --image-store ds_fusion_01 --no-tlsverify --name virtualcontainerhost01 --bridge-network dvPgContainer01 --force
screen-shot-2016-11-06-at-21-37-13
After running the command above, let’s check the state of our virtual infrastructure from vCenter. We will see that we now have a new resource pool acting as the Virtual Container Host, and a VM that serves as the endpoint for that container host.
screen-shot-2016-11-06-at-21-45-38
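If you later need to re-display the VCH details (for example the Docker API endpoint), vic-machine also provides an inspect sub-command. A hedged sketch, reusing the target, credentials, and name from the create command above (check ./vic-machine-darwin inspect --help for the exact flags in your build):

$ ./vic-machine-darwin inspect --target 172.16.159.150/dc1.lab.bicarait.com --user administrator@vsphere.local --password VMware1! --name virtualcontainerhost01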


Okay, the installation is completed. Let’s try to connect to the VCH with the Docker client now.
$ docker -H 172.16.159.153:2376 --tls info
screen-shot-2016-11-06-at-22-24-09
After that, let’s run the usual docker pull and docker run commands, just like in my previous posts.
$ docker -H 172.16.159.153:2376 --tls \
--tlscert='./docker-appliance-cert.pem' \
--tlskey='./docker-appliance-key.pem' pull vmwarecna/nginx

$ docker -H 172.16.159.153:2376 --tls \
--tlscert='./docker-appliance-cert.pem' \
--tlskey='./docker-appliance-key.pem' run -d -p 80:80 vmwarecna/nginx

Note: for production, we must use the *.pem certificate and key to connect to the environment. Since this is my development environment, I will skip that.
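To verify that the nginx container is actually running, a quick check might look like this (172.16.159.153 is my VCH endpoint; which address serves port 80 depends on how your container network is mapped, so adjust accordingly):

$ docker -H 172.16.159.153:2376 --tls ps     # the vmwarecna/nginx container should be listed
$ curl http://172.16.159.153                 # or the container's own IP, depending on the network mapping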

 
Okay, now finally… here is a video explaining the operation of vSphere Integrated Containers, VMware Admiral, and VMware Harbor (I already explained Admiral and Harbor in my previous blog post here):

 
Kind Regards,
Doddi Priyambodo
 

Running your Docker in Production Environment using VMware vSphere Integrated Containers – (Part 1)


In this tutorial series, after explaining how to run Docker on my Mac, it is now time to move those containers from your laptop to a production environment. On the VMware side, we will utilise vSphere ESXi as the production-grade virtualisation technology that forms the foundation of the infrastructure.
In a production environment, a lot of things need to be considered: availability, manageability, performance, reliability, scalability, and security (AMPRSS). These AMPRSS considerations can be more easily addressed when moving Docker containers from your development environment (laptop) to the production environment (vSphere ESXi). One concern with Docker technology is that containers share the same kernel and are therefore less isolated than real VMs; a bug in the kernel affects every container.

The vSphere Integrated Containers Engine allows developers familiar with Docker to develop in containers and deploy them alongside traditional VM-based workloads on vSphere clusters, and allows these workloads to be managed through the vSphere UI in a way that is familiar to existing vSphere admins.
Docker itself is far less capable than an actual hypervisor. It doesn’t come with HA, live migration, hardware virtualisation security, etc. VIC (vSphere Integrated Containers) brings the container paradigm directly to the hypervisor, allowing you to deploy containers as first-class citizens. The net result is that containers inherit all of the benefits of VMs, because they are VMs. The Docker image, once instantiated, becomes a VM inside vSphere. This solves security as well as operational concerns at the same time.
But these are NOT traditional VMs that require, for example, 2TB and take 2 minutes to boot. These are usually about as big as the Docker image itself and take a few seconds to instantiate. They boot from a minimal ISO that contains a stripped-down Linux kernel (based on Photon OS), and the container images and volumes are attached as disks.
The ContainerVMs are provisioned into a “Virtual Container Host” which is just like a Swarm cluster, but implemented as logical distributed capacity in a vSphere Resource Pool. You don’t need to add or remove physical nodes to increase or decrease the VCH capacity, you simply re-configure its resource limits and let vSphere clustering and DRS (Distributed Resource Scheduler) handle the details.
The biggest benefit of VIC is that it helps to draw a clear line between the infrastructure provider (IT admin) and the consumer (developer/ops). The consumer wins because they don’t have to deal with managing container hosts, patching, configuring, etc. The provider wins because they can leverage the operational model they are already using today (including NSX and VSAN).
Developers will continue to develop with Docker and IT admins will keep managing VMs. The best of both worlds.

It can also be combined with other enterprise tools to manage the enterprise environment, such as vRealize Operations, vRealize Log Insight, Virtual SAN, VMware NSX, and vRealize Automation.
In this post, I will utilise these technologies from VMware:

  • vSphere ESXi 6 U2 as the number one, well-known and stable production grade Virtualisation Technology.
  • vCenter 6 U2 as the Virtualisation central management and operation tool.
  • vSphere Integrated Containers as the Enterprise Production Ready container runtime for vSphere, allowing developers familiar with Docker to develop in containers and deploy them alongside traditional VM-based workloads on vSphere clusters. Download from here: The vSphere Integrated Containers Engine
  • VMware Admiral as the container management platform for deploying and managing container-based applications. It provides a UI for developers and app teams to provision and manage containers, including retrieving stats and info about container instances. Cloud administrators can manage container hosts and apply governance to their usage, including capacity quotas and approval workflows. Download from here: Admiral
  • VMware Harbor as an enterprise-class registry server that stores and distributes Docker images. It has a UI and the functionality usually required by an enterprise, such as security, identity, replication, and management. Download from here: Harbor

This is the diagram block for those components:

As you can see in the diagram above, vSphere Integrated Containers comprises three main components, all of which are available as open source on GitHub. With these three capabilities, vSphere Integrated Containers will enable VMware customers to deliver a production-ready container solution to their developers and app teams.
 
*to be continued in part 2.
Kind Regards,
Doddi Priyambodo

Running your First Cloud Native Applications using Docker Container in Mac

As mentioned in my previous post, I will elaborate on cloud native applications. But before that, I will post some basic concepts about Docker as the container technology for the cloud native applications approach.
Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure in the same ways you manage your applications. By taking advantage of Docker’s methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production.
If you are still unsure what Docker, microservices, and cloud native applications mean, you can read about them here: http://bicarait.com/2016/05/31/microservices-cloud-native-applications/
In this post, I will start with the basics of how to run your first application in Docker, provisioned on your Mac laptop. Later, I will do the same in vSphere Integrated Containers and on the VMware Photon Platform.


Let’s Start the first Chapter: INSTALLATION 

  1. Download your Docker Engine from this URL (stable version): Get Docker for Mac (stable)
  2. There are actually two approaches to run Docker on your Mac. The first is to utilise Docker for Mac (which is what we will do); the second is to utilise Docker Toolbox. The difference is that the Docker for Mac approach uses HyperKit as a lightweight virtualisation technology to run the containers, while Docker Toolbox uses VirtualBox as the virtualisation technology.
  3. You can actually run both the Docker for Mac and Docker Toolbox approaches at the same time on your macOS, but there are several things that you need to do, such as creating different environments (set and unset commands). I will not elaborate on that in this post.
  4. Assume that your machine does not have a Docker engine installed yet.
  5. Install and run Docker. Double-click the Docker.dmg that you downloaded earlier to start the installation.
     screen-shot-2016-10-31-at-15-29-46
  6. Check the Docker version that is now running on your Mac after the installation is completed.
     screen-shot-2016-10-31-at-15-34-06
  7. Let’s start with a basic application: an nginx web server running in Docker (the exact command is sketched right after this list).
  8. Check your http://localhost first to check the status.
     screen-shot-2016-10-31-at-15-54-28
  9. Basically, Docker will first try to find the image for your application locally. If it cannot find it, it will pull it from the public registry (the default configuration is Docker Hub).
     screen-shot-2016-10-31-at-15-55-21
  10. Check your http://localhost now to check the status.
     screen-shot-2016-10-31-at-15-56-03
  11. Check the status of the container using the docker ps command. If you want to stop the web server, run docker stop webserver, and start it again with docker start webserver.
  12. If you want to stop and remove the container, use the command docker rm -f webserver. If you want to delete the local image, run docker rmi nginx. Before that, you can list the local images using docker images.
     screen-shot-2016-10-31-at-16-00-14
  13. If you want to use another Docker registry other than https://hub.docker.com, or share files from your Mac to your Docker engine, you can also configure that in the Docker for Mac menu.
     screen-shot-2016-10-31-at-16-17-40
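For reference, the nginx command used in steps 7–12 is the standard Docker for Mac getting-started example; a minimal sketch (the container name webserver matches the stop/start/rm commands above):

$ docker run -d -p 80:80 --name webserver nginx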

Let’s Continue with the second Chapter: BOARDING YOUR APPS 

For this example we will utilise Docker Compose to run WordPress in an isolated environment. Compose is a Docker tool for running multi-container environments. We will create a Compose file, and then execute the YAML file using the docker-compose command.

  1. Create a directory for the project on your Mac.
  2. Create a Docker Compose file. This will include WordPress and MySQL to create a simple blog website (a sketch of the file content follows this list).
     screen-shot-2016-11-01-at-18-51-10
     screen-shot-2016-11-01-at-18-53-49
  3. Now, bring the project up using the command $ docker-compose up -d
  4. Check whether the images have been pulled and the containers are running, using the docker images and docker ps commands.
     screen-shot-2016-11-01-at-18-56-35
  5. Finally, test opening WordPress in your browser. Because we configured port 8000, we open http://localhost:8000
     screen-shot-2016-11-01-at-19-02-49
  6. Do the WordPress installation using the UI wizard, then finally open the created site.
     screen-shot-2016-11-01-at-19-01-32
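For reference, here is a minimal docker-compose.yml sketch along the lines of what the screenshot above shows. The service names, image tags, and passwords here are illustrative assumptions; only the 8000:80 port mapping is taken from the steps above:

version: '2'
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_PASSWORD: wordpress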

 
Kind Regards,
Doddi Priyambodo

VMware Photon Platform or vSphere Integrated Container

Cloud native application implementation using container technology is hard to ignore if you want to keep up with this culture of agile and fast innovation. VMware has two approaches to support this initiative: either the vSphere Integrated Containers approach or the VMware Photon Platform approach.
So, what are the differences? In Summary:

  • If you want to run both containerized and traditional workloads in production side by side on your existing infrastructure, VIC is the ideal choice. VIC extends all the enterprise capabilities of vSphere without requiring additional investment in retooling or re-architecting your existing infrastructure.
  • If you are looking at building an on-prem, greenfield infrastructure stack for running only containerized workloads, and you also want a highly available and scalable control plane, an API-driven, automated DevOps environment, plus multi-tenancy for the creation and isolation of resources, Photon Platform is the way to go.

Over the next couple of weeks, I will elaborate more on these cloud native applications. Please wait for my next posts.
So, these are the plan:
1. Run Docker Apps on the laptop (in my case, I will use a Mac)
We will utilise: Mac OS, Docker, Swarm.
2. Run Docker Apps in vSphere Integrated Container
We will utilise: VMware vSphere, vCenter, Photon OS, Harbor, Admiral.
3. Run Docker Apps in VMware Photon Platform
We will utilise: VMware vSphere, Photon Controller, Photon OS, Kubernetes
 
Kind Regards,
Doddi Priyambodo

Physical CPU (pCPU) and Virtual CPU (vCPU) Ratio in VMware vSphere ESXi Environment

I have run these kinds of tests a couple of times. For business-critical applications, telco workload applications (Network Function Virtualisation (NFV)), or CPU-intensive applications (without highly bursty CPU workloads), it is always recommended to dimension 1 vCPU to 1 pCPU, regardless of the roughly 25% performance benefit that Hyper-Threading gives thanks to the scheduling enhancement in Intel processors.
For IT workloads (such as email, web apps, normal apps, etc.) we can use a higher ratio such as 1 pCPU to 4 vCPUs, or even 1:10; I have also seen some 1:20 production environments. This is because those VMs will not burst at the same time and have stable, long-running transactions per second.
These are some tests that I ran for a Network Function Virtualisation platform: we pushed one of the telco workload applications (messaging), using Spirent as the performance load tester, to our VNF (telco VM) running on Intel servers.
Known Fact for Host and VM during the Test:

  • Configuration of the Host = 20 cores x 2.297 GHz = 45,940 MHz
  • Configuration of the VM = 10 vCPU x 2.297 GHz = 22,970 MHz
  • Only 1 VM is powered on in the host (for testing purpose only to avoid contention)

Observation of Host CPU performance:

  • Max Host during Test Performance (Hz)= 12,992 MHz of total 45,940 MHz
  • Max Host during Test Performance (%)= 28.27 % of total 45,940 MHz

Observation of VM CPU performance:

  • Max VM during Test Performance (Hz)= 12,367 MHz of total 22,970 MHz
  • Max VM during Test Performance (%)= 53.83 % of total 22,970 MHz

Conclusion:

  • The percentage calculation gives the same result as the MHz calculation. That is, if we multiply the percentage usage by the total MHz, the result is the MHz usage (for example, 53.83% of 22,970 MHz ≈ 12,367 MHz).
  • The CPU clock speed needed by the VNF vendor can be calculated based on MHz or on percentage, as long as the functionality is compared apples to apples (you need to consider the number of modules/functions).
  • From a performance observation point of view, this also supports the 1-to-1 dimensioning between vCPU and pCPU for NFV workloads -> 10 vCPU behaves almost the same as 10 pCPU (from the MHz usage calculations in this scenario).

Notes:
A physical CPU is a physical core that resides in the server. A virtual CPU is a logical core that resides in the VM (and can benefit from Hyper-Threading technology).
 
Kind Regards,
Doddi Priyambodo

Why Smaller vCPU is better than Bigger vCPU in a fully probable contention environment

In a VMware vSphere environment, why is a smaller vCPU count better than a bigger one (if the workload only requires a few vCPUs) in an environment where contention is likely?
To explain this further let’s take an example of a four pCPU host that has four VMs, three with 1 vCPU and one with 4 vCPUs. At best only the three single vCPU VMs can be scheduled concurrently. In such an instance the 4 vCPU VM would have to wait for all four pCPUs to be idle. In this example the excess vCPUs actually impose scheduling constraints and consequently degrade the VM’s overall performance, typically indicated by low CPU utilization but a high CPU Ready figure.
So, always start with a smaller number of vCPUs and then add extra vCPUs later if needed, based on your observations of the workload.
This reference post also gives a very good description of why too many vCPUs can hurt your virtual machine’s performance: http://www.gabesvirtualworld.com/how-too-many-vcpus-can-negatively-affect-your-performance/
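To check whether an oversized VM is actually hitting this, you can watch the CPU Ready metric on the host. A quick sketch using esxtop (the roughly 5% ready time per vCPU threshold is a common rule of thumb, not an official limit):

$ esxtop     # run this over SSH on the ESXi host
# press "c" for the CPU view, then "V" to show only virtual machines
# watch the %RDY column: sustained values above roughly 5% per vCPU
# usually mean the VM is spending significant time waiting on the scheduler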
Conclusion: “Right Size Your VMs!”
 
Kind Regards,
Doddi Priyambodo

Why do we need to Virtualize our Oracle Database

Customers usually want to extend the benefits they have already achieved with virtualisation (the financial, business and operational benefits of virtualisation within their operating environment) to another level, for example to business-critical applications such as Oracle Database, thereby reaping the many benefits and advantages of this infrastructure.

Customers typically aim to achieve the following benefits:

  • Effectively utilise datacenter resources, as on traditional physical servers many database servers only utilise around 30% of their resources.
  • Maximise availability of the Oracle environment at lower cost, as virtualization can give another layer of high availability.
  • Rapidly deploy Oracle database servers for development, testing & production, as virtualization can have templates and automation.
  • Maximise uptime during planned maintenance, as virtualisation gives the ability to move the database to another machine without any downtime for the workload.
  • Minimise planned and unplanned downtime, as virtualization can give better disaster recovery avoidance and disaster recovery actions.
  • Automated testing and failover of Oracle datacenter environments for disaster recovery and business continuity.
  • Achieve IT Compliance, as we have better monitoring systems, audit mechanism, policy enforcement, and asset managements.
  • Minimise Oracle datacenter costs for floor space, energy, cooling, hardware and labour, as many physical servers can be consolidated onto just a few physical servers. This gives the customer a better TCO/ROI compared to the physical server approach.

 
Kind Regards,
Doddi Priyambodo
 
 

Update sequence for vSphere 6.0 and its compatible VMware products

Following our technical discussion regarding upgrading VMware environments: I have actually already written about this topic in a different thread on this blog, but I would like to emphasise it again using another KB from VMware. VMware has made available certain releases to address critical issues and architectural changes for several products to allow for continued interoperability:

  • vCloud Connector (vCC)
  • vCloud Director (vCD)
  • vCloud Networking and Security (VCNS, formerly vShield Manager)
  • VMware Horizon View
  • VMware NSX for vSphere (NSX Manager)
  • vCenter Operations Manager (vCOPs)
  • vCenter Server / vCenter Server Appliance
  • vCenter Infrastructure Navigator (VIN)
  • vCenter Site Recovery Manager (SRM)
  • vCenter Update Manager (VUM)
  • vRealize Automation Center (vRA, formerly known as vCloud Automation Center)
  • vRealize Automation Application Services (vRAS, formerly vSphere AppDirector)
  • vRealize Business, IT Cost Management (ITBM, formerly VMware IT Business Management)
  • vRealize Configuration Manager (VCM, formerly vCenter Configuration Manager)
  • vRealize Hyperic
  • vRealize Log Insight (vRLI)
  • vRealize Operations Manager (vROPs, formerly known as vCenter Operations Manager, vCOPs)
  • vRealize Orchestrator (vRO, formerly vCenter Orchestrator)
  • vSphere Big Data Extension (BDE)
  • vSphere Data Protection (VDP)
  • vSphere Replication (VR)
  • vSphere ESXi
  • vShield Edge / NSX Edge
  • vShield App / NSX Logical Firewall (NSX LFw)
  • vShield Endpoint / NSX Guest Introspection and Data Security (NSX Guest IDS)
This article only encompasses environments running vSphere and/or vCloud Suite 6.0 and VMware products compatible with vSphere 6.0.

In an environment with vSphere 6.0 and its compatible VMware products, perform the update sequence described in the Supported Update Sequence table.

Supported Update Sequence

Continue reading Update sequence for vSphere 6.0 and its compatible VMware products

VMware vSphere® Metro Storage Cluster Recommended Practices for VMware vSphere 6.0

Some of my customers ask about the Metro Storage Cluster configuration for VMware deployments to achieve better availability of their precious data. There is a very good resource on this from Duncan Epping (one of VMware’s most respected technologists). One of the topics is the requirements and constraints from a VMware technology perspective. This is the explanation taken from the whitepaper.

Technical Requirements and Constraints
Due to the technical constraints of an online migration of VMs, the following specific requirements, which are listed in the VMware Compatibility Guide, must be met prior to consideration of a stretched cluster implementation:

  • Storage connectivity using Fibre Channel, iSCSI, NFS, and FCoE is supported.
  • The maximum supported network latency between sites for the VMware ESXi management networks is 10ms round-trip time (RTT).
  • vSphere vMotion and vSphere Storage vMotion support a maximum of 150ms latency as of vSphere 6.0, but this is not intended for stretched clustering usage.
  • The maximum supported latency for synchronous storage replication links is 10ms RTT. Refer to documentation from the storage vendor because the maximum tolerated latency is lower in most cases. The most commonly supported maximum RTT is 5ms.
  • The ESXi vSphere vMotion network has a redundant network link minimum of 250Mbps.

The storage requirements are slightly more complex. A vSphere Metro Storage Cluster requires what is in effect a single storage subsystem that spans both sites. In this design, a given datastore must be accessible—that is, be able to be read and be written to—simultaneously from both sites. Further, when problems occur, the ESXi hosts must be able to continue to access datastores from either array transparently and with no impact to ongoing storage operations.

Reference:
Download the complete document from here: vmware-vsphere-metro-storage-cluster-recommended-practices-white-paper (http://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/vmware-vsphere-metro-storage-cluster-recommended-practices-white-paper.pdf)
 
Kind Regards,
Doddi Priyambodo

Description about My VMware Home Lab in MacBook Pro

I just want to write this down as a personal note for myself, since I always forget when someone asks me about the personal VMware home lab that I use to do research on-premises.
As described earlier in this post: http://bicarait.com/2015/09/12/penjelasan-mengenai-my-computer-home-lab-untuk-vmware-technology/
Currently I am adding another home lab for my research and demos to VMware customers.
MacBook Pro Retina 15-inch, OS X El Capitan (10.11.6), Quad Core 2.5 GHz Intel i7, 16 GB Memory, NVIDIA GeForce GT750M 2GB, 1 TB Flash Storage.
Detail Components:

  • I am using VMware Fusion Professional Version 8.1.1 to create Nested Virtualisation.
  • Control Server is using CentOS Linux 7 (control01.lab.bicarait.com)
    Function: NTP (ntpd), DNS (bind), LDAP (openldap), DHCP (dhcpd)
    IP: 172.16.159.142
    Username: root, Password: VMware1!
  • Shared Storage is using Openfiler 2.6 (storage01.lab.bicarait.com)
    Access: https://172.16.159.139:446/
    Username: openfiler, Password: password
    iSCSI: iqn.2006-01.com.openfiler:tsn.a7cd1aac2554 – “fusiondisk (/mnt/fusiondisk/)” using volume name “fusioniscsi1” size 100 GB – /dev/fusiondisk/fusioniscsi1 – iSCSI target: 172.16.159.139 port 3260 – datastore: ds_fusion_01 (a sketch for registering this target on the ESXi hosts follows this list)
  • Virtualisation for Management Cluster is using ESXi 6.0 U2 (esxi01.lab.bicarait.com)
    IP: 172.16.159.141 (vmkernel management)
    Username: root, Password: VMware1!
  • Virtualisation for Payload Cluster is using ESXi 6.0 U2 (esxi02.lab.bicarait.com & esxi03.lab.bicarait.com)
    IP: 172.16.159.151 & 172.16.159.152 (vmkernel management)
    Username: root, Password: VMware1!
  • vCenter is using vCenter Appliance 6.0 U2 (vcsa01.lab.bicarait.com)
    IP: https://172.16.159.150/vsphere-client
    Username: administrator@vsphere.local, Password: VMware1!
  • Virtual Machines to Play with:
    PhotonVM01 – IP:  DHCP – Username: root, Password: VMware1!
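As mentioned in the iSCSI line above, the Openfiler target can also be registered on each ESXi host from the command line. A hedged sketch, assuming the software iSCSI adapter shows up as vmhba33 on your host (check esxcli iscsi adapter list first):

$ esxcli iscsi software set --enabled=true
$ esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=172.16.159.139:3260
$ esxcli storage core adapter rescan --adapter=vmhba33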

This is the screenshot of my fusion environment:
screen-shot-2016-11-03-at-15-32-42
screen-shot-2016-11-04-at-15-11-52
 
Kind Regards,
Doddi Priyambodo

Minimum Requirements for the VMware vCenter Appliance 6.x

I know that you can find these requirements in the Knowledge Base; I just want to write them down again to remind myself, because I get this question a lot from my customers.

Disk storage on the host machine:
Embedded Platform Services Controller:
  • Tiny: 120GB
  • Small: 150GB
  • Medium: 300GB
  • Large: 450GB
External Platform Services Controller:
  • Tiny: 86GB
  • Small: 108GB
  • Medium: 220GB
  • Large: 280GB
External Platform Services Controller Appliance:
  • Tiny: 30GB
  • Small: 30GB
  • Medium: 30GB
  • Large: 30GB
Memory in the vCenter Server Appliance:
Platform Services Controller only: 2GB RAM
All components on one appliance:
  • Tiny: 8GB RAM
  • Small: 16GB RAM
  • Medium: 24GB RAM
  • Large: 32GB RAM
CPUs in the vCenter Server Appliance:
Platform Services Controller only: 2 CPUs
All components on one appliance:
  • Tiny: 2 CPUs
  • Small: 4 CPUs
  • Medium: 8 CPUs
  • Large: 16 CPUs
Notes:
  • Tiny Environment (up to 10 Hosts, 100 Virtual Machines)
  • Small Environment (up to 100 Hosts, 1,000 Virtual Machines)
  • Medium Environment (up to 400 Hosts, 4,000 Virtual Machines)
  • Large Environment (up to 1,000 Hosts, 10,000 Virtual Machines)

 
 

Hyperconverge Battle Blogs Recap – Performance Test

Just a recap: these are some public materials regarding VMware Virtual SAN vs. a competitor, as the hyperconverged battle continues.
VSAN vs Nutanix Head-to-Head Performance Testing — Part 1
https://blogs.vmware.com/storage/2015/06/03/vsan-vs-nutanix-head-head-performance-testing-part-1/
VSAN vs Nutanix Head-to-Head Performance Testing — Part 2
https://blogs.vmware.com/storage/2015/06/10/vsan-vs-nutanix-head-head-performance-testing-part-2/
VSAN vs Nutanix Head-to-Head Performance Testing — Part 3
https://blogs.vmware.com/storage/2015/06/12/vsan-vs-nutanix-head-head-testing-part-3/
VSAN vs. Nutanix — Head-to-Head Performance Testing — Part 4 — Exchange!
https://blogs.vmware.com/storage/2015/07/06/vsan-vs-nutanix-head-head-performance-testing-part-4-exchange/
VSAN and The Joys Of Head-to-Head Performance Testing
http://blogs.vmware.com/storage/2015/06/29/vsan-joys-head-head-performance-testing/
http://blogs.vmware.com/virtualblocks/2015/06/21/vmware-vsan-vs-nutanix-head-to-head-pricing-comparison-why-pay-more/ 
Virtual SAN 6.0 Performance with VMware VMmark
http://blogs.vmware.com/performance/2015/04/virtual-san-6-0-performance-vmware-vmmark.html
StorageReview.com:
VMware Virtual SAN Review: Overview and Configuration
VMware Virtual SAN Review: VMmark Performance
VMware Virtual SAN Review: Sysbench OLTP Performance
VMware Virtual SAN Review: SQL Server Performance
Why We Don’t Have a Nutanix NX-8150 Review
Other Blogs:
http://www.theregister.co.uk/2015/08/07/nutanix_digs_itself_into_hole_and_refuses_to_drop_the_shovel/
http://hansdeleenheer.com/when-bad-press-really-is-bad-press/
https://lonesysadmin.net/2015/08/07/three-thoughts-on-the-nutanix-storagereview-situation/
 
By the way, do you realise that the EULA of one of the competitors states that:
Use.
2.1. Limitations on Use.
You must not use the Software or Documentation except as permitted by this Agreement. You must not:

  1. disclose the results of testing, benchmarking or other performance or evaluation information related to the Software or the product to any third party without the prior written consent of Nutanix;
  2. access or use the Software or Documentation for any competitive purposes (e.g. to gain competitive intelligence; to design or build a competitive product or service, or a product providing features, functions or graphics similar to those used or provided by Nutanix; to copy any features, functions or graphics; or to monitor availability, performance or functionality for competitive purposes);

Man!!! Talk about transparency… How can we measure competitiveness, then, given that EULA?
 
Kind Regards,
Doddi Priyambodo

VSAN Erasure Coding – Storage Based Policy Management

A new policy setting has been introduced to accommodate the new RAID-5/RAID-6 configurations in VSAN (only available in all-flash configurations). A minimum of 4 hosts is required for RAID-5, and a minimum of 6 hosts is required for a RAID-6 configuration.
This new policy setting is called Failure Tolerance Method. This policy setting takes two values: performance and capacity. When it is left at the default value of performance, objects continue to be deployed with a RAID-1/mirror configuration for the best performance. When the setting is changed to capacity, objects are now deployed with either a RAID-5 or RAID-6 configuration.
The RAID-5 or RAID-6 configuration is determined by the Number of Failures to Tolerate setting. If this is set to 1, the configuration is RAID-5 (3+1, with roughly 1.33x capacity overhead instead of 2x for mirroring). If this is set to 2, the configuration is RAID-6 (4+2, with roughly 1.5x instead of 3x).
 
Kind Regards,
Doddi Priyambodo