VMware vSphere® Metro Storage Cluster Recommended Practices for VMware vSphere 6.0

Some of my customers ask about Metro Storage Cluster configurations for VMware deployments to achieve better availability of their precious data. There is a very good resource from Duncan Epping (one of VMware's most respected technologists). One of the topics is the requirements and constraints from a VMware technology perspective. The explanation below is taken from the whitepaper.

Technical Requirements and Constraints

Due to the technical constraints of an online migration of VMs, the following specific requirements, which are listed in the VMware Compatibility Guide, must be met prior to consideration of a stretched cluster implementation:

  • Storage connectivity using Fibre Channel, iSCSI, NFS, and FCoE is supported.
  • The maximum supported network latency between sites for the VMware ESXi™ management networks is 10ms round-trip time (RTT).
  • vSphere vMotion and vSphere Storage vMotion support a maximum of 150ms latency as of vSphere 6.0, but this is not intended for stretched clustering use.
  • The maximum supported latency for synchronous storage replication links is 10ms RTT. Refer to documentation from the storage vendor because the maximum tolerated latency is lower in most cases. The most commonly supported maximum RTT is 5ms.
  • The ESXi vSphere vMotion network requires a redundant network link with a minimum of 250Mbps.

The storage requirements are slightly more complex. A vSphere Metro Storage Cluster requires what is in effect a single storage subsystem that spans both sites. In this design, a given datastore must be accessible, that is, able to be read and written to, simultaneously from both sites. Further, when problems occur, the ESXi hosts must be able to continue to access datastores from either array transparently and with no impact to ongoing storage operations.
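The latency limits above can be spot-checked from the command line. The sketch below is my own illustration: it parses the summary line of ordinary Linux/BSD ping output, while on an ESXi host you would instead run vmkping against the remote site's vmkernel IP (the commented-out peer address is a placeholder).

```shell
# Check whether ping's average RTT (read from stdin) is within a budget in ms.
# Works with the min/avg/max summary line printed by both Linux and BSD ping.
rtt_within_budget() {
    awk -F'/' '/rtt|round-trip/ {avg=$5} END { exit !(avg != "" && avg+0 <= l) }' l="$1"
}

# 10 ms is the vMSC management-network RTT limit listed above; the peer
# address is a placeholder (on ESXi you would use vmkping instead of ping):
# ping -c 3 192.0.2.10 | rtt_within_budget 10 && echo "within budget"
```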

Reference:
Download the complete document from here: vmware-vsphere-metro-storage-cluster-recommended-practices-white-paper (http://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/vmware-vsphere-metro-storage-cluster-recommended-practices-white-paper.pdf)

 

Kind Regards,
Doddi Priyambodo


How to create an autostart script in Linux (Red Hat / CentOS / Fedora)


Option 1: Use chkconfig script to run /etc/init.d/…

  1. Create a script and place it in /etc/init.d (e.g. /etc/init.d/myscript). The script should have the following format:
#!/bin/bash
# chkconfig: 2345 20 80
# The chkconfig header above means: run in runlevels 2, 3, 4, and 5,
# with start priority 20 and stop priority 80.

# Source the init function library (provides daemon, killproc, status).
. /etc/init.d/functions

start() {
    # code to start the app comes here
    # example: daemon program_name &
    :
}

stop() {
    # code to stop the app comes here
    # example: killproc program_name
    :
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        stop
        start
        ;;
    status)
        # code to check the status of the app comes here
        # example: status program_name
        ;;
    *)
        echo "Usage: $0 {start|stop|status|restart}"
        exit 1
esac
exit 0

Enable the script

  1. $ chkconfig --add myscript 
    $ chkconfig --level 2345 myscript on 
    
  2. Check the script is indeed enabled – you should see “on” for the levels you selected.
    $ chkconfig --list | grep myscript
    

You can then use the script like so: /etc/init.d/myscript start, or service myscript start.

Option 2: Use a crontab job to run the script at boot time.

You need to use a special string called @reboot. It runs the job once, at startup, after the system boots.

@reboot  /path/to/job
@reboot  /path/to/shell.script
@reboot  /path/to/command

This is an easy way to give your users the ability to run a shell script or command at boot time without root access. First, run the crontab command:
$ crontab -e
OR
# crontab -e -u doddi

For example, to run a script called /home/doddi/bin/myScript.sh at boot:
@reboot /home/doddi/bin/myScript.sh 

Under RHEL / CentOS / Fedora, you need to enable crond on boot:
# chkconfig crond on
# service crond restart

If you are using a modern distro with systemd, use:
# systemctl enable crond.service
# systemctl restart crond.service
# systemctl status crond.service
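If you prefer not to open the interactive editor, the @reboot entry can also be installed non-interactively. This is a sketch: add_reboot_job is a hypothetical helper name of mine, and the script path is the one from the example above.

```shell
# Print the current crontab with an extra @reboot entry appended.
# Pipe the result to `crontab -` to install it.
add_reboot_job() {
    existing="$1"   # current crontab text, e.g. from `crontab -l`
    job="$2"        # command or script to run at boot
    printf '%s\n@reboot %s\n' "$existing" "$job"
}

# Usage (assumes crond is enabled as shown above):
# add_reboot_job "$(crontab -l 2>/dev/null)" /home/doddi/bin/myScript.sh | crontab -
```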

 


Description about My VMware Home Lab in MacBook Pro

I just want to write this as a personal note, since I always forget when someone asks me about the personal VMware home lab that I use for on-premises research.

As described earlier in this post: http://bicarait.com/2015/09/12/penjelasan-mengenai-my-computer-home-lab-untuk-vmware-technology/
Currently I am adding another home lab for my research and for demos to VMware customers.

MacBook Pro Retina 15-inch, OS X El Capitan (10.11.6), Quad Core 2.5 GHz Intel i7, 16 GB Memory, NVIDIA GeForce GT750M 2GB, 1 TB Flash Storage.

Detail Components:

  • I am using VMware Fusion Professional Version 8.1.1 to create Nested Virtualisation.
  • Control Server is using CentOS Linux 7 (control01.lab.bicarait.com)
    Function: NTP (ntpd), DNS (bind), LDAP (openldap), DHCP (dhcpd)
    IP: 172.16.159.142
    Username: root, Password: VMware1!
  • Shared Storage is using Openfiler 2.6 (storage01.lab.bicarait.com)
    Access: https://172.16.159.139:446/
    Username: openfiler, Password: password
    iSCSI: iqn.2006-01.com.openfiler:tsn.a7cd1aac2554 – “fusiondisk (/mnt/fusiondisk/)” using volume name “fusioniscsi1” size 100 GB – /dev/fusiondisk/fusioniscsi1 – iSCSI target: 172.16.159.139 port 3260 – datastore: ds_fusion_01
  • Virtualisation for Management Cluster is using ESXi 6.0 U2 (esxi01.lab.bicarait.com)
    IP: 172.16.159.141 (vmkernel management)
    Username: root, Password: VMware1!
  • Virtualisation for Payload Cluster is using ESXi 6.0 U2 (esxi02.lab.bicarait.com & esxi03.lab.bicarait.com)
    IP: 172.16.159.151 & 172.16.159.152 (vmkernel management)
    Username: root, Password: VMware1!
  • vCenter is using vCenter Appliance 6.0 U2 (vcsa01.lab.bicarait.com)
    URL: https://172.16.159.150/vsphere-client
    Username: administrator@vsphere.local, Password: VMware1!
  • Virtual Machines to Play with:
    PhotonVM01 – IP:  DHCP – Username: root, Password: VMware1!

These are the screenshots of my Fusion environment:

[Screenshot: Fusion environment, taken 2016-11-03]

[Screenshot: Fusion environment, taken 2016-11-04]

 

Kind Regards,
Doddi Priyambodo


Minimum Requirements for VMware vCenter Appliance 6.x

I know that you can find these requirements in the VMware Knowledge Base; I just want to write them down again as a reminder, because I get this question a lot from my customers.

Disk storage on the host machine

Embedded Platform Services Controller:
  • Tiny: 120GB
  • Small: 150GB
  • Medium: 300GB
  • Large: 450GB

External Platform Services Controller:
  • Tiny: 86GB
  • Small: 108GB
  • Medium: 220GB
  • Large: 280GB

External Platform Services Controller Appliance:
  • Tiny: 30GB
  • Small: 30GB
  • Medium: 30GB
  • Large: 30GB

Memory in the vCenter Server Appliance

Platform Services Controller only: 2GB RAM

All components on one appliance:
  • Tiny: 8GB RAM
  • Small: 16GB RAM
  • Medium: 24GB RAM
  • Large: 32GB RAM

CPUs in the vCenter Server Appliance

Platform Services Controller only: 2 CPUs

All components on one appliance:
  • Tiny: 2 CPUs
  • Small: 4 CPUs
  • Medium: 8 CPUs
  • Large: 16 CPUs
Notes:
  • Tiny Environment (up to 10 Hosts, 100 Virtual Machines)
  • Small Environment (up to 100 Hosts, 1,000 Virtual Machines)
  • Medium Environment (up to 400 Hosts, 4,000 Virtual Machines)
  • Large Environment (up to 1,000 Hosts, 10,000 Virtual Machines)
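For quick reference, the sizing table above can be folded into a small lookup helper. This is a sketch of my own (the function name is mine; the numbers are taken from the table and notes above, for the all-components-on-one-appliance case):

```shell
# Look up vCSA 6.x sizing (all components on one appliance), per the table above.
vcsa_size() {
    case "$1" in
        tiny)   echo "2 CPUs, 8GB RAM (up to 10 hosts / 100 VMs)" ;;
        small)  echo "4 CPUs, 16GB RAM (up to 100 hosts / 1,000 VMs)" ;;
        medium) echo "8 CPUs, 24GB RAM (up to 400 hosts / 4,000 VMs)" ;;
        large)  echo "16 CPUs, 32GB RAM (up to 1,000 hosts / 10,000 VMs)" ;;
        *)      echo "unknown size: $1" >&2; return 1 ;;
    esac
}

vcsa_size medium
```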

 

 


Hyperconverged Battle Blogs Recap – Performance Test

Just a recap: these are some public materials regarding VMware Virtual SAN vs. its competitors as the hyperconverged battle continues.

VSAN vs Nutanix Head-to-Head Performance Testing — Part 1

https://blogs.vmware.com/storage/2015/06/03/vsan-vs-nutanix-head-head-performance-testing-part-1/

VSAN vs Nutanix Head-to-Head Performance Testing — Part 2

https://blogs.vmware.com/storage/2015/06/10/vsan-vs-nutanix-head-head-performance-testing-part-2/

VSAN vs Nutanix Head-to-Head Performance Testing — Part 3

https://blogs.vmware.com/storage/2015/06/12/vsan-vs-nutanix-head-head-testing-part-3/

VSAN vs. Nutanix — Head-to-Head Performance Testing — Part 4 — Exchange!

https://blogs.vmware.com/storage/2015/07/06/vsan-vs-nutanix-head-head-performance-testing-part-4-exchange/

VSAN and The Joys Of Head-to-Head Performance Testing

http://blogs.vmware.com/storage/2015/06/29/vsan-joys-head-head-performance-testing/

http://blogs.vmware.com/virtualblocks/2015/06/21/vmware-vsan-vs-nutanix-head-to-head-pricing-comparison-why-pay-more/ 

Virtual SAN 6.0 Performance with VMware VMmark

http://blogs.vmware.com/performance/2015/04/virtual-san-6-0-performance-vmware-vmmark.html

StorageReview.com:

VMware Virtual SAN Review: Overview and Configuration

VMware Virtual SAN Review: VMmark Performance

VMware Virtual SAN Review: Sysbench OLTP Performance

VMware Virtual SAN Review: SQL Server Performance

Why We Don’t Have a Nutanix NX-8150 Review

Other Blogs:

http://www.theregister.co.uk/2015/08/07/nutanix_digs_itself_into_hole_and_refuses_to_drop_the_shovel/

http://hansdeleenheer.com/when-bad-press-really-is-bad-press/

https://lonesysadmin.net/2015/08/07/three-thoughts-on-the-nutanix-storagereview-situation/

 

By the way, do you realise that the EULA for one of the competitors states that:

Use.
2.1. Limitations on Use.

You must not use the Software or Documentation except as permitted by this Agreement. You must not:

  1. disclose the results of testing, benchmarking or other performance or evaluation information related to the Software or the product to any third party without the prior written consent of Nutanix;
  2. access or use the Software or Documentation for any competitive purposes (e.g. to gain competitive intelligence; to design or build a competitive product or service, or a product providing features, functions or graphics similar to those used or provided by Nutanix; to copy any features, functions or graphics; or to monitor availability, performance or functionality for competitive purposes);

Man!!! Talk about transparency… How can we measure competitiveness, then, with an EULA like that?

 

Kind Regards,
Doddi Priyambodo


VSAN Erasure Coding – Storage Based Policy Management

A new policy setting has been introduced to accommodate the new RAID-5/RAID-6 configurations in VSAN (available only in all-flash configurations). A minimum of 4 hosts is required for RAID-5, and a minimum of 6 hosts for RAID-6.

This new policy setting is called Failure Tolerance Method. This policy setting takes two values: performance and capacity. When it is left at the default value of performance, objects continue to be deployed with a RAID-1/mirror configuration for the best performance. When the setting is changed to capacity, objects are now deployed with either a RAID-5 or RAID-6 configuration.

The RAID-5 or RAID-6 configuration is determined by the number of failures to tolerate setting. If that setting is 1, the configuration is RAID-5; if it is 2, the configuration is RAID-6.
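To make the capacity trade-off concrete (my own illustration, not from VMware's documentation): RAID-1 with one failure to tolerate stores two full copies of the data, while VSAN's RAID-5 uses a 3 data + 1 parity layout and RAID-6 a 4 data + 2 parity layout. A rough raw-capacity calculator:

```shell
# Raw capacity (GB) needed for a given usable size under each VSAN layout.
# RAID-1 FTT=1: 2 copies; RAID-5: 3 data + 1 parity;
# RAID-1 FTT=2: 3 copies; RAID-6: 4 data + 2 parity.
raw_capacity_gb() {
    usable="$1"
    case "$2" in
        raid1-ftt1) n=2; d=1 ;;
        raid5)      n=4; d=3 ;;
        raid1-ftt2) n=3; d=1 ;;
        raid6)      n=6; d=4 ;;
        *) echo "unknown layout: $2" >&2; return 1 ;;
    esac
    awk -v u="$usable" -v n="$n" -v d="$d" 'BEGIN { printf "%.1f\n", u * n / d }'
}

raw_capacity_gb 100 raid5   # 100 GB usable needs ~133 GB raw with RAID-5
```

So switching the failure tolerance method from performance (mirroring) to capacity (erasure coding) saves roughly a third of the raw capacity at FTT=1, at some cost in write performance.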

 

Kind Regards,
Doddi Priyambodo


What is Hadoop? Why do we need to virtualize it using VMware?

What is Hadoop?

Hadoop is an Apache open source project that provides scalable and distributed computing, with much of its early development done at Yahoo!. It provides a framework that can process large amounts of data by leveraging the parallel and distributed processing of many compute nodes arrayed in a cluster. These clusters can be configured as a single host or scaled up to thousands of machines, depending on the workload.

What are Hadoop Components?

These are the core modules of Hadoop, which provide its distributed computing capabilities.

  • Hadoop Common – The utilities that support the other Hadoop modules.
  • Hadoop Distributed File System – The distributed file system used by most Hadoop distributions. Also known by its initials, HDFS.
  • Hadoop YARN – Used to manage cluster resources and schedule jobs.
  • Hadoop MapReduce – A YARN-based system for processing large amounts of data in parallel.

In addition to the core modules, there are others that provide specific and specialized capabilities to this distributed processing framework. These are just some of the tools:

  • Ambari – A web-based tool for provisioning, management, and monitoring of Hadoop clusters.
  • HBase – Distributed database that supports structured data storage.
  • Hive – Data warehouse model with data summarization and ad hoc query capability.
  • Pig – Data flow language.
  • ZooKeeper – Coordination service for distributed applications.

These are the modules available from the Apache open-source project, but there are also more than thirty companies that provide Hadoop distributions, which include the open-source code while adding competing management solutions, processing engines, and many other features. Some of the best-known and most widely used distributions come from Cloudera, MapR, and Hortonworks.

Why do we need to Virtualize Hadoop workloads?

Now that we know about Hadoop: we always discuss virtualization on this blog, so is Hadoop suitable for virtualization? Yes. If you would like the following additional benefits, you should consider virtualizing the workload.

  • Better resource utilization:
    Collocating virtual machines containing Hadoop roles with virtual machines containing different workloads on the same set of VMware ESXi™ server hosts can balance the use of the system. This leads to lower operating expenses and lower capital expenses as you can leverage the existing infrastructure and skills in the data center and you do not have to invest in bare-metal servers for your Hadoop deployment.
  • Alternative storage options:
    Originally, Hadoop was developed with local storage in mind, and this type of storage scheme can be used with vSphere also. The shared storage that is frequently used as a basis for vSphere can also be leveraged for Hadoop workloads. This reinforces leveraging the existing investment in storage technologies for greater efficiencies in the enterprise.
  • Isolation:
    This includes running different versions of Hadoop itself on the same cluster, running Hadoop alongside other applications, forming an elastic environment, or supporting different Hadoop tenants. Isolation can reduce your overall security risk, help ensure you meet your SLAs, and support Hadoop as a service back to the lines of business.
  • Availability and fault tolerance:
    The NameNode, the Resource Manager and other Hadoop components, such as Hive Metastore and HCatalog, can be single points of failure in a system. vSphere services such as VMware vSphere High Availability (vSphere HA) and VMware vSphere Fault Tolerance (vSphere FT) can protect these components from server failure and improve availability.
  • Balance the loads:
    Resource management tools such as VMware vSphere vMotion® and VMware vSphere Distributed Resource Scheduler™ (vSphere DRS) can provide availability during planned maintenance and can be used to balance the load across the vSphere cluster.
  • Business critical applications:
    Uptime is just as important in a Hadoop environment; why would the enterprise want to go back to a time when servers and server components were single points of failure? Leverage the existing investment in vSphere to meet SLAs and provide excellent service back to the business.

VMware also has a component called VMware Big Data Extensions (https://www.vmware.com/products/big-data-extensions) to rapidly deploy highly available Hadoop components and easily manage the infrastructure workloads.

vSphere Big Data Extensions enables rapid deployment, management, and scaling of Hadoop in virtual and cloud environments. Its built-in scale-in/scale-out capabilities enable on-demand Hadoop instances.

Everything from simple cloning to sophisticated end-user provisioning products such as VMware vRealize Automation™ can speed up the deployment of Hadoop. This enables IT to act as a service provider and offer Hadoop as a service to the different lines of business, providing faster time to market. This further positions today's IT as a value driver rather than a cost center.

For more detail about VMware Big Data Extensions, please see this datasheet from VMware: https://www.vmware.com/files/pdf/products/vsphere/VMware-vSphere-Big-Data-Extensions-Datasheet.pdf

 

Kind Regards,
Doddi Priyambodo


CIO Point of View = “R E S T”

There are four major C-level business issues that all CxOs would like to solve. They are: R.E.S.T.

R = Revenue
E = Expense
S = Security & Compliance
T = Time to Market

Kind Regards,
Doddi Priyambodo


Installation and Documentation Guide for VMware SDDC Proof of Concept

POC installation and documentation guides are generally available online, both on the VMware website and in various blogs, but here are some recommendations:

Google.com and VMware.com of course…

 

Kind Regards,
Doddi Priyambodo


Summary How To Upgrade VMware vCAC 6.1 to VMware vRA 6.2

Below I describe the mechanism for upgrading vCAC 6.1 to vRA 6.2. (Note: the latest version at the time of writing is vRA 7, with faaarrr more enhancements than previous versions.)

Anyway, here is a summary of the upgrade steps:

  1. Back up the Identity Appliance, vRA Appliance(s), IaaS database, and IaaS server(s). To be safe, clone these VMs first (as backups):
  • vCenter
  • vCenter DB (if separate)
  • Identity Appliance
  • vRA Appliance
  • IaaS Server
  • IaaS Database
  2. Stop the vRA Appliance(s) services.
  3. Stop the IaaS server(s) services.
  4. Upgrade the Identity Appliance.
  5. Upgrade the vRA Appliance.
  6. Upgrade the IaaS database.
  7. Upgrade IaaS.

Attached is the official How to Upgrade document. (Better to read the document first, then try following the tutorials from the blogs below.)

http://pubs.vmware.com/vra-62/topic/com.vmware.ICbase/PDF/vrealize-automation-62-upgrading.pdf

  • http://emadyounis.com/vrealize-automation/upgrading-vrealize-automation-6-1-formally-vcac-to-6-2/
  • http://www.vmdaemon.com/2014/12/upgrading-vcac-6-1-vra-6-2/
  • http://vdm-001.blogspot.co.id/2015/01/upgrade-vcac-61-to-vrealize-automation.html
  • http://theithollow.com/2014/12/16/vrealize-automation-6-2-upgrade/

For a fresh installation/deployment of vRA 7, you can follow this blog (simpler than installing 6.1):

 

 

Kind Regards,
Doddi Priyambodo
