Implementing cloud-native applications with container technology is hard to ignore if you want to keep up with today's culture of agile, fast innovation. VMware has two approaches to support this initiative: vSphere Integrated Containers (VIC) and the VMware Photon Platform.
So, what are the differences? In summary:
- If you want to run both containerized and traditional workloads in production side by side on your existing infrastructure, VIC is the ideal choice. VIC extends all the enterprise capabilities of vSphere without requiring additional investment in retooling or re-architecting your existing infrastructure.
- If you are looking at building an on-premises, greenfield infrastructure stack for running only containerized workloads, and you also want a highly available and scalable control plane, an API-driven, automated DevOps environment, plus multi-tenancy for resource creation and isolation, Photon Platform is the way to go.
Over the next couple of weeks, I will elaborate more on cloud-native applications. Please watch for my next posts.
So, this is the plan:
1. Run Docker apps on a laptop (in my case, a Mac)
We will utilise: Mac OS, Docker, Swarm.
2. Run Docker apps in vSphere Integrated Containers
We will utilise: VMware vSphere, vCenter, Photon OS, Harbor, Admiral.
3. Run Docker Apps in VMware Photon Platform
We will utilise: VMware vSphere, Photon Controller, Photon OS, Kubernetes
I have tested this several times. For business-critical applications, telco workloads (Network Function Virtualisation, NFV), or CPU-intensive applications (without large swings in CPU load), it is always recommended to dimension 1 vCPU to 1 pCPU, regardless of the roughly 25% performance benefit that Hyper-Threading provides through Intel's scheduling enhancements.
For IT workloads (such as email, web apps, and other ordinary applications) we can use a higher ratio, such as 1 pCPU to 4 vCPUs, or even 1:10; I have even seen 1:20 in some production environments. This works because the VMs do not all burst at the same time and their transactions per second remain stable.
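As a back-of-the-envelope illustration of these ratios, here is a tiny sizing helper. The function name and the numbers plugged in are mine, purely for illustration:

```python
# Hypothetical sizing helper: how many vCPUs can a host offer at a given
# pCPU:vCPU consolidation ratio? Purely illustrative arithmetic.

def vcpu_capacity(physical_cores: int, ratio: int) -> int:
    """Number of vCPUs a host can carry at a 1:ratio pCPU-to-vCPU ratio."""
    return physical_cores * ratio

# A 20-core host dimensioned 1:1 for NFV vs. 1:4 for general IT workloads
print(vcpu_capacity(20, 1))  # -> 20 vCPUs (business-critical / NFV)
print(vcpu_capacity(20, 4))  # -> 80 vCPUs (general IT workloads)
```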
Here are some tests I ran on a Network Function Virtualisation platform: we pushed a telco messaging workload, using Spirent as the performance load tester, against our VNF (telco VM) running on Intel servers.
Known facts for the host and VM during the test:
- Configuration of the Host = 20 cores x 2.297 GHz = 45,940 MHz
- Configuration of the VM = 10 vCPU x 2.297 GHz = 22,970 MHz
- Only 1 VM is powered on in the host (for testing purpose only to avoid contention)
Observation of Host CPU performance:
- Max host CPU usage during the test (MHz) = 12,992 MHz of total 45,940 MHz
- Max host CPU usage during the test (%) = 28.27% of total 45,940 MHz
Observation of VM CPU performance:
- Max VM CPU usage during the test (MHz) = 12,367 MHz of total 22,970 MHz
- Max VM CPU usage during the test (%) = 53.83% of total 22,970 MHz
- The percentage calculation gives the same result as the MHz calculation: multiplying the usage percentage by the total MHz yields the usage in MHz.
- The CPU clock speed a VNF vendor requires can be calculated in MHz or as a percentage, as long as the comparison is apples to apples (take the number of modules/functions into account).
- From a performance standpoint, this also shows that for NFV workloads a 1:1 dimensioning between vCPU and pCPU holds: 10 vCPUs perform almost the same as 10 pCPUs (based on the MHz usage figures).
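The arithmetic behind these observations is easy to check. The figures below are the measured ones; the helper function is my own:

```python
# Verify that percentage usage and MHz usage are two views of the same number.

def usage_percent(used_mhz: float, total_mhz: float) -> float:
    """CPU usage as a percentage of total capacity."""
    return used_mhz / total_mhz * 100

host_total = 20 * 2297   # 45,940 MHz (20 cores x 2.297 GHz)
vm_total = 10 * 2297     # 22,970 MHz (10 vCPUs x 2.297 GHz)

print(round(usage_percent(12992, host_total), 2))  # host: ~28.28%
print(round(usage_percent(12367, vm_total), 2))    # VM:   ~53.84%
```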
A physical CPU (pCPU) is a physical core residing in the server. A virtual CPU (vCPU) is a logical core residing in a VM (and it can benefit from Hyper-Threading technology).
In a VMware vSphere environment, why is a smaller vCPU count better than a larger one (when the workload only needs a few vCPUs) in an environment where contention is likely?
To explain this further let’s take an example of a four pCPU host that has four VMs, three with 1 vCPU and one with 4 vCPUs. At best only the three single vCPU VMs can be scheduled concurrently. In such an instance the 4 vCPU VM would have to wait for all four pCPUs to be idle. In this example the excess vCPUs actually impose scheduling constraints and consequently degrade the VM’s overall performance, typically indicated by low CPU utilization but a high CPU Ready figure.
So, always start with a smaller vCPU count and add extra vCPUs later if needed, based on your observation of the workload.
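The scheduling constraint in the four-pCPU example can be sketched as a toy model. This is a deliberate simplification (the real ESXi scheduler uses relaxed co-scheduling), but the intuition holds:

```python
# Toy model of strict co-scheduling: a VM is dispatched only when enough
# pCPUs are free for ALL of its vCPUs at once.

def schedulable(free_pcpus: int, vm_vcpus: int) -> bool:
    """Can a VM with vm_vcpus vCPUs run right now?"""
    return vm_vcpus <= free_pcpus

pcpus = 4
running = [1, 1, 1]              # three 1-vCPU VMs currently on CPU
free = pcpus - sum(running)      # 1 pCPU left idle

print(schedulable(free, 1))      # True: another 1-vCPU VM could still run
print(schedulable(free, 4))      # False: the 4-vCPU VM must wait for all 4
```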
This reference post also shares a very good explanation of why too many vCPUs can hurt your virtual machine's performance: http://www.gabesvirtualworld.com/how-too-many-vcpus-can-negatively-affect-your-performance/
Conclusion: “Right Size Your VMs!”
Customers usually want to extend the financial, business, and operational benefits they have already achieved through virtualization to the next level, for example to business-critical applications such as Oracle Database, thereby reaping the many advantages of adopting this infrastructure.
Customers aim to achieve the following benefits:
- Effectively utilise datacenter resources, since on traditional physical servers many database servers only utilize around 30% of the available resources.
- Maximise availability of the Oracle environment at lower cost, as virtualization can give another layer of high availability.
- Rapidly deploy Oracle database servers for development, testing & production, as virtualization can have templates and automation.
- Maximise uptime during planned maintenance, as virtualization gives the ability to move a database to another host without any downtime for the workload.
- Minimise planned and unplanned downtime, as virtualization can give better disaster recovery avoidance and disaster recovery actions.
- Automated testing and failover of Oracle datacenter environments for disaster recovery and business continuity.
- Achieve IT Compliance, as we have better monitoring systems, audit mechanism, policy enforcement, and asset managements.
- Minimise Oracle datacenter costs for floor space, energy, cooling, hardware, and labour, as many physical servers can be consolidated onto just a few. This gives the customer a better TCO/ROI compared to the physical-server approach.
Following our technical discussion about upgrading VMware environments: I have already written about this topic in another thread on this blog, but I would like to emphasise it again using another VMware KB. VMware has made certain releases available to address critical issues and architectural changes across several products, to allow for continued interoperability:
- vCloud Connector (vCC)
- vCloud Director (vCD)
- vCloud Networking and Security (VCNS, formerly vShield Manager)
- VMware Horizon View
- VMware NSX for vSphere (NSX Manager)
- vCenter Operations Manager (vCOPs)
- vCenter Server / vCenter Server Appliance
- vCenter Infrastructure Navigator (VIN)
- vCenter Site Recovery Manager (SRM)
- vCenter Update Manager (VUM)
- vRealize Automation Center (vRA, formerly known as vCloud Automation Center)
- vRealize Automation Application Services (vRAS, formerly vSphere AppDirector)
- vRealize Business, IT Cost Management (ITBM, formerly VMware IT Business Management)
- vRealize Configuration Manager (VCM, formerly vCenter Configuration Manager)
- vRealize Hyperic
- vRealize Log Insight (vRLI)
- vRealize Operations Manager (vROPs, formerly known as vCenter Operations Manager, vCOPs)
- vRealize Orchestrator (vRO, formerly vCenter Orchestrator)
- vSphere Big Data Extension (BDE)
- vSphere Data Protection (VDP)
- vSphere Replication (VR)
- vSphere ESXi
- vShield Edge / NSX Edge
- vShield App / NSX Logical Firewall (NSX LFw)
- vShield Endpoint / NSX Guest Introspection and Data Security (NSX Guest IDS)
This article only encompasses environments running vSphere and/or vCloud Suite 6.0 and VMware products compatible with vSphere 6.0.
In an environment with vSphere 6.0 and its compatible VMware products, perform the update sequence described in the Supported Update Sequence table.
Supported Update Sequence
Some of my customers ask about the Metro Storage Cluster configuration for VMware deployments to achieve better availability of their precious data. There is a very good resource from Duncan Epping (one of VMware's most respected technologists). One of the topics it covers is the requirements and constraints from a VMware technology perspective. The following explanation is taken from the whitepaper.
Technical Requirements and Constraints
Due to the technical constraints of an online migration of VMs, the following specific requirements, which are listed in the VMware Compatibility Guide, must be met prior to consideration of a stretched cluster implementation:
- Storage connectivity using Fibre Channel, iSCSI, NFS, and FCoE is supported.
- The maximum supported network latency between sites for the VMware ESXi management networks is 10 ms round-trip time (RTT).
- vSphere vMotion and vSphere Storage vMotion support a maximum of 150 ms latency as of vSphere 6.0, but this is not intended for stretched clustering usage.
- The maximum supported latency for synchronous storage replication links is 10ms RTT. Refer to documentation from the storage vendor because the maximum tolerated latency is lower in most cases. The most commonly supported maximum RTT is 5ms.
- The ESXi vSphere vMotion network requires a redundant network link with a minimum of 250 Mbps.

The storage requirements are slightly more complex. A vSphere Metro Storage Cluster requires what is in effect a single storage subsystem that spans both sites. In this design, a given datastore must be accessible (that is, able to be read and written to) simultaneously from both sites. Further, when problems occur, the ESXi hosts must be able to continue to access datastores from either array transparently and with no impact to ongoing storage operations.
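These numeric limits lend themselves to a simple pre-check. The sketch below is my own, not VMware tooling; only the thresholds come from the whitepaper quoted above:

```python
# Hypothetical vMSC pre-check against the documented limits:
# 10 ms RTT for management and synchronous replication, 250 Mbps for vMotion.

def vmsc_precheck(mgmt_rtt_ms, storage_rtt_ms, vmotion_mbps):
    """Return a list of requirement violations (empty list = all limits met)."""
    issues = []
    if mgmt_rtt_ms > 10:
        issues.append("management network RTT exceeds 10 ms")
    if storage_rtt_ms > 10:
        issues.append("synchronous replication RTT exceeds 10 ms")
    if vmotion_mbps < 250:
        issues.append("vMotion link below 250 Mbps")
    return issues

print(vmsc_precheck(5, 4, 1000))   # [] -> within the documented limits
print(vmsc_precheck(12, 4, 100))   # two violations reported
```

Remember that many storage vendors tolerate less than 10 ms for synchronous replication (5 ms is the most common maximum), so check the vendor documentation before relying on the generic limit.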
Download the complete document from here: vmware-vsphere-metro-storage-cluster-recommended-practices-white-paper (http://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/vmware-vsphere-metro-storage-cluster-recommended-practices-white-paper.pdf)
I just want to write this down as a personal note, since I always forget when someone asks me about the personal VMware home lab that I use for on-premises research.
As described earlier in this post: http://bicarait.com/2015/09/12/penjelasan-mengenai-my-computer-home-lab-untuk-vmware-technology/
Currently I am adding another home lab for my research and for demos to VMware customers.
MacBook Pro Retina 15-inch, OS X El Capitan (10.11.6), Quad Core 2.5 GHz Intel i7, 16 GB Memory, NVIDIA GeForce GT750M 2GB, 1 TB Flash Storage.
- I am using VMware Fusion Professional Version 8.1.1 to create Nested Virtualisation.
- Control Server is using CentOS Linux 7 (control01.lab.bicarait.com)
Function: NTP (ntpd), DNS (bind), LDAP (openldap), DHCP (dhcpd)
Username: root, Password: VMware1!
- Shared Storage is using Openfiler 2.6 (storage01.lab.bicarait.com)
Username: openfiler, Password: password
iSCSI: iqn.2006-01.com.openfiler:tsn.a7cd1aac2554 – “fusiondisk (/mnt/fusiondisk/)” using volume name “fusioniscsi1” size 100 GB – /dev/fusiondisk/fusioniscsi1 – iSCSI target: 172.16.159.139 port 3260 – datastore: ds_fusion_01
- Virtualisation for Management Cluster is using ESXi 6.0 U2 (esxi01.lab.bicarait.com)
IP: 172.16.159.141 (vmkernel management)
Username: root, Password: VMware1!
- Virtualisation for Payload Cluster is using ESXi 6.0 U2 (esxi02.lab.bicarait.com & esxi03.lab.bicarait.com)
IP: 172.16.159.151 & 172.16.159.152 (vmkernel management)
Username: root, Password: VMware1!
- vCenter is using vCenter Appliance 6.0 U2 (vcsa01.lab.bicarait.com)
Username: firstname.lastname@example.org, Password: VMware1!
- Virtual Machines to Play with:
PhotonVM01 – IP: DHCP – Username: root, Password: VMware1!
This is the screenshot of my fusion environment:
I know you can find these requirements in the Knowledge Base; I just want to write them down again as a reminder, because I get this question from customers a lot.
Disk storage on the host machine
Embedded Platform Services Controller:
External Platform Services Controller:
External Platform Services Controller Appliance:
Memory in the vCenter Server Appliance:
- Platform Services Controller only: 2 GB RAM
- All components on one appliance:
  - Tiny: 8 GB RAM
  - Small: 16 GB RAM
  - Medium: 24 GB RAM
  - Large: 32 GB RAM
CPUs in the vCenter Server Appliance:
- Platform Services Controller only: 2 CPUs
- All components on one appliance:
  - Tiny: 2 CPUs
  - Small: 4 CPUs
  - Medium: 8 CPUs
  - Large: 16 CPUs
- Tiny Environment (up to 10 Hosts, 100 Virtual Machines)
- Small Environment (up to 100 Hosts, 1,000 Virtual Machines)
- Medium Environment (up to 400 Hosts, 4,000 Virtual Machines)
- Large Environment (up to 1,000 Hosts, 10,000 Virtual Machines)
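Put together, the sizing figures above make a handy lookup table. Here is one way to encode them; the dict layout and helper name are mine, and the values are copied from this post:

```python
# vCSA 6.0 deployment sizes: vCPUs, RAM, and the inventory each size supports.
VCSA_SIZES = {
    "tiny":   {"vcpus": 2,  "ram_gb": 8,  "max_hosts": 10,   "max_vms": 100},
    "small":  {"vcpus": 4,  "ram_gb": 16, "max_hosts": 100,  "max_vms": 1000},
    "medium": {"vcpus": 8,  "ram_gb": 24, "max_hosts": 400,  "max_vms": 4000},
    "large":  {"vcpus": 16, "ram_gb": 32, "max_hosts": 1000, "max_vms": 10000},
}

def pick_size(hosts, vms):
    """Return the smallest deployment size that covers the inventory."""
    for name, spec in VCSA_SIZES.items():  # dicts preserve insertion order
        if hosts <= spec["max_hosts"] and vms <= spec["max_vms"]:
            return name
    raise ValueError("inventory exceeds the largest supported size")

print(pick_size(50, 600))   # -> 'small'
```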
Just a recap: here are some public materials on VMware Virtual SAN versus a competitor, as the hyperconverged battle continues.
VSAN vs Nutanix Head-to-Head Performance Testing — Part 1
VSAN vs Nutanix Head-to-Head Performance Testing — Part 2
VSAN vs Nutanix Head-to-Head Performance Testing — Part 3
VSAN vs. Nutanix — Head-to-Head Performance Testing — Part 4 — Exchange!
VSAN and The Joys Of Head-to-Head Performance Testing
Virtual SAN 6.0 Performance with VMware VMmark
VMware Virtual SAN Review: Overview and Configuration
VMware Virtual SAN Review: VMmark Performance
VMware Virtual SAN Review: Sysbench OLTP Performance
VMware Virtual SAN Review: SQL Server Performance
Why We Don’t Have a Nutanix NX-8150 Review
By the way, do you realise that the EULA of one of the competitors states:
2.1. Limitations on Use.
You must not use the Software or Documentation except as permitted by this Agreement. You must not:
- disclose the results of testing, benchmarking or other performance or evaluation information related to the Software or the product to any third party without the prior written consent of Nutanix;
- access or use the Software or Documentation for any competitive purposes (e.g. to gain competitive intelligence; to design or build a competitive product or service, or a product providing features, functions or graphics similar to those used or provided by Nutanix; to copy any features, functions or graphics; or to monitor availability, performance or functionality for competitive purposes);
Man! Talk about transparency... How can we measure competitiveness then, given that EULA?
A new policy setting has been introduced to accommodate the new RAID-5/RAID-6 configurations in VSAN (available only in all-flash configurations). A minimum of 4 hosts is required for RAID-5, and a minimum of 6 hosts for RAID-6.
This new policy setting is called Failure Tolerance Method. This policy setting takes two values: performance and capacity. When it is left at the default value of performance, objects continue to be deployed with a RAID-1/mirror configuration for the best performance. When the setting is changed to capacity, objects are now deployed with either a RAID-5 or RAID-6 configuration.
The choice between RAID-5 and RAID-6 is determined by the number of failures to tolerate (FTT). If FTT is set to 1, the configuration is RAID-5; if it is set to 2, the configuration is RAID-6.
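The policy logic described above can be sketched as a small decision function. The helper name and the RAID-1 witness arithmetic are my additions; the RAID-5/RAID-6 host counts are the ones stated above:

```python
# Map a VSAN storage policy (failure tolerance method + failures to tolerate)
# to the object layout and minimum host count.

def vsan_layout(ftm, ftt):
    """Return (layout, minimum hosts) for the given policy settings."""
    if ftm == "performance":              # default: RAID-1 mirroring
        return ("RAID-1", 2 * ftt + 1)    # ftt+1 replicas plus witness components
    if ftm == "capacity":                 # all-flash only: erasure coding
        if ftt == 1:
            return ("RAID-5", 4)          # 3 data + 1 parity
        if ftt == 2:
            return ("RAID-6", 6)          # 4 data + 2 parity
    raise ValueError("unsupported policy combination")

print(vsan_layout("capacity", 1))   # ('RAID-5', 4)
print(vsan_layout("capacity", 2))   # ('RAID-6', 6)
```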