Getting to Know the Latest Release of VMware vSphere: Version 6.5

In October 2016, VMware introduced the latest vSphere release, version 6.5. On 16 November 2016 the software became publicly available for download. Well, as usual, each new release of this virtualization software brings a long list of enhancements that competitors find very hard to match. Some of them are:

  1. Much easier and simpler to use (e.g., enhancements to vCenter)
  2. Security features “built in” directly into vSphere (e.g., the new VM and vMotion encryption features)
  3. A universal application platform (e.g., optimized for vSphere Integrated Containers)
  4. Far more reliable operations (e.g., enhancements to HA, DRS, and vROps)

In upcoming posts I will drill down deeper into this latest release, with screenshots taken directly from my personal lab environment.

Kind Regards,
Doddi Priyambodo

Oracle Database Standard Edition 2 Compare to other Editions

If you follow this blog, you will know that I also have a section dedicated to Oracle Database. Several years ago I was an Oracle Database Administrator for Oracle 9i, 10gR2, and 11gR2, handling operational work such as architecture design, deployment, performance tuning, backup, replication, clustering, and PL/SQL programming. Currently, however, I find cloud technology more interesting than on-premises database technology, which is one of the reasons I moved my focus to cloud technology (read: VMware). Anyway, the current version of Oracle Database available today is 12.1.0.2 (12cR1).

In this post I would like to elaborate on the new licensing scheme Oracle introduced with version 12.1.0.2: Oracle Database Standard Edition 2. This is a brief explanation from Oracle’s license document:

Oracle Database Standard Edition 2 may only be licensed on servers that have a maximum capacity of 2 sockets. When used with Oracle Real Application Clusters, Oracle Database Standard Edition 2 may only be licensed on a maximum of 2 one-socket servers. In addition, notwithstanding any provision in Your Oracle license agreement to the contrary, each Oracle Database Standard Edition 2 database may use a maximum of 16 CPU threads at any time. When used with Oracle Real Application Clusters, each Oracle Database Standard Edition 2 database may use a maximum of 8 CPU threads per instance at any time. The minimums when licensing by Named User Plus (NUP) metric are 10 NUP licenses per server.

These are some notes for customers after reading the statement above, along with other relevant points:

  • Oracle Database Standard Edition 2 (SE2) will replace SE and SE1 from version 12.1.0.2
  • SE2 will have a limitation of maximum 2 socket systems and a total of 16 CPU threads*
    • *note not cores!
    • SE2 is hard coded in Resource Manager to use no more than 16 CPU threads.
  • RAC is still included with SE2 but is restricted to 2 sockets across the cluster. Therefore, each server must be single-socket.
  • SE One and SE will no longer be available for purchase from 10 November 2015.
  • If you need to purchase additional DB SE and SE One licenses, you must purchase SE2 instead and install the required version of 11g from here. Note – you must still comply with the license rules for SE2.
  • Oracle is offering a FREE license migration from SE One* and SE to SE2.
    • *SE One customers will have to pay a 20% increase in support as part of the migration.
    • SE customers face no other cost increases for license or support, subject to Named User minimums being met.
  • Named user minimums for SE2 are now 10 per server.
  • 12.1.0.1 was the last SE and SE1 release
  • 12.1.0.1 SE and SE1 customers will have 6 months of patching support once SE2 12.1.0.2 is released, with quarterly patches still available in October 2015 and January 2016.
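To make the caps above concrete, here is a tiny shell sketch that simply restates the numbers from the license text quoted earlier (illustrative arithmetic only, not an official Oracle tool):

```shell
# SE2 caps from the license text above (illustrative arithmetic only).
threads_single=16        # max CPU threads per SE2 database, single instance
rac_nodes=2              # SE2 RAC is limited to 2 one-socket servers
threads_per_instance=8   # max CPU threads per instance under RAC
nup_min_per_server=10    # Named User Plus minimum per server

echo "Single-instance cap: $threads_single threads"
echo "RAC cap: $((rac_nodes * threads_per_instance)) threads across $rac_nodes nodes"
echo "NUP minimum for a $rac_nodes-node RAC: $((rac_nodes * nup_min_per_server)) licenses"
```

Note that the RAC total (2 × 8 = 16 threads) matches the single-instance cap; RAC spreads the same budget across two nodes.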

Now, let’s compare it to the other editions. These are the features available in SE2 compared to Oracle Database Enterprise Edition:

Continue reading Oracle Database Standard Edition 2 Compare to other Editions

Download VMware Products Datasheet (Bundle and per-Item)

Initially, I wasn’t sure why I was posting this article, since it adds some redundancy to other content on the internet. Hmmm, well, maybe because customers always ask me for the datasheets of VMware products; I figured it would be easier to point them to this post than to have them google the datasheets and download them one by one.

VMware Bundle Components Datasheet:

– VMware vCloud Suite Datasheet : (Download Here)
– VMware vRealize Suite Datasheet : (Download Here)
– VMware vCloud NFV : (Download Here)

VMware per-product Components Datasheet:

– VMware vSphere : (Download Here)
– VMware vCenter : (Download Here)
– VMware vCloud Director for SP : (Download Here)
– VMware vRealize Automation : (Download Here)
– VMware vRealize Operations : (Download Here)
– VMware vRealize Business for Cloud  : (Download Here)
– VMware Site Recovery Manager : (Download Here)
– VMware NSX : (Download Here)
– VMware vSAN: (Download Here)

Notes: there are still other offerings from VMware, such as Cloud Foundation, vSphere Integrated Containers, vRealize Code Stream, vSphere Integrated OpenStack, vRealize Log Insight, vRealize Network Insight, Workspace ONE, Horizon, AirWatch, etc. (please refer to http://www.vmware.com for more detail).

Conclusion:

After reading this post, maybe some of you now realize that VMware is not just vSphere ESXi + vCenter, right? 🙂

Yeah, it’s the Software-Defined Data Center.

VMware, a global leader in cloud infrastructure and business mobility, accelerates our customers’ digital transformation journey by enabling enterprises to master a software-defined approach to business and IT. With VMware solutions, organizations are improving business agility by modernizing data centers, driving innovation with modern data and apps, creating exceptional experiences by mobilizing everything, and safeguarding customer trust with a defense-in-depth approach to cybersecurity.

 

Kind Regards,
Doddi Priyambodo

Explanation about How CPU Limit and CPU Reservation can Slow your VM (if you don’t do a proper sizing and analysis)

In this post, I would like to share about CPU limit and CPU reservation configuration in vSphere ESXi virtualisation technology.

Actually, those features are great (the configuration is also available in vCloud Director, which in turn calls the corresponding configuration in vCenter). They are great if you really know, and have already considered, how to use them properly. For example, if you would like to use a CPU reservation, please make sure you are not running those VMs in a fully contended/overcommitted environment. As for the CPU limit: if you have an application that always consumes 100% of the CPU no matter how much CPU you give the VM, you can use the limit configuration to cap that application’s CPU usage (but, for me, the best way is to ask your developers to fix the application!).

Okay, let’s talk more about CPU Limit.

Duncan Epping and Frank Denneman (both among the most respected VMware bloggers) once said: “Look at a vCPU limit as a restriction within a specific time frame. When a time frame consists of 2000 units and a limit has been applied of 300 units it will take a full pass, so 300 “active” + 1700 units of waiting before it is scheduled again.”

So, applying a limit on a vCPU will slow your VM down no matter what, even if there are no other VMs running on that 4-socket quad-core host.
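Their 2000-unit example works out to simple arithmetic; this small shell sketch merely restates those numbers:

```shell
# Restating the time-frame example above: 2000 units per frame, limit of 300.
frame=2000
limit=300
echo "active:  $limit units"
echo "waiting: $((frame - limit)) units"
echo "effective share of one pCPU: $((100 * limit / frame))%"
```

In other words, the limited vCPU is capped at 15% of a physical CPU per scheduling frame, regardless of how idle the host is.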

Next, let’s talk more about CPU Reservation.

Josh Odgers (another virtualisation blogger) also explained that a CPU reservation “reserves” CPU resources measured in MHz, but this has nothing to do with the CPU scheduler. So setting a reservation will help improve performance for the VM you set it on, but it will not “solve” CPU-ready issues caused by “oversized” VMs, or by too high an overcommitment ratio of CPU resources.

The configuration of limits and reservations is done outside the guest OS, so your operating system (Windows/Linux/etc.) and your application (Java/.NET/C/etc.) are not aware of it. Your application will request resources based on the CPU allocated to that VM.
You should minimize the use of limits and reservations, as they make operations more complex.

Conclusion:

Better to rely on the default VMkernel scheduler, which already provides great scheduling functionality that takes fairness into account. If you want to prioritize one VM over the others, you can use the CPU shares configuration instead.

But, the most important thing is: “Please Bro…, Right Size Your VM!”

 

Kind Regards,
Doddi Priyambodo

 

VMware Photon Platform or vSphere Integrated Container

Implementing cloud-native applications using container technology is hard to ignore if you want to keep up with today’s culture of agile, fast innovation. VMware has two approaches to support this initiative: the vSphere Integrated Containers (VIC) approach or the VMware Photon Platform approach.

So, what are the differences? In summary:

  • If you want to run both containerized and traditional workloads in production side by side on your existing infrastructure, VIC is the ideal choice. VIC extends all the enterprise capabilities of vSphere without requiring additional investment in retooling or re-architecting your existing infrastructure.
  • If you are looking at building an on-prem, greenfield infrastructure stack for running only containerized workloads, and you would also like a highly available and scalable control plane, an API-driven, automated DevOps environment, plus multi-tenancy for resource creation and isolation, Photon Platform is the way to go.

Over the next couple of weeks, I will elaborate more on these cloud-native applications. Please wait for my next posts.

So, this is the plan:
1. Run Docker apps on a laptop (in my case, a Mac)
We will utilise: macOS, Docker, Swarm.
2. Run Docker Apps in vSphere Integrated Container
We will utilise: VMware vSphere, vCenter, Photon OS, Harbor, Admiral.
3. Run Docker Apps in VMware Photon Platform
We will utilise: VMware vSphere, Photon Controller, Photon OS, Kubernetes

 

Kind Regards,
Doddi Priyambodo

VMware vSphere® Metro Storage Cluster Recommended Practices for VMware vSphere 6.0

Some of my customers ask about the Metro Storage Cluster configuration for VMware deployments, to achieve better availability of their precious data. There is a very good resource from Duncan Epping (one of VMware’s most respected technologists). One of the topics is the requirements and constraints from a VMware technology perspective. Below is the explanation taken from the whitepaper.

Technical Requirements and Constraints

Due to the technical constraints of an online migration of VMs, the following specific requirements, which are listed in the VMware Compatibility Guide, must be met prior to consideration of a stretched cluster implementation:

  • Storage connectivity using Fibre Channel, iSCSI, NFS, and FCoE is supported.
  • The maximum supported network latency between sites for the VMware ESXi™ management networks is 10ms round-trip time (RTT).
  • vSphere vMotion and vSphere Storage vMotion support a maximum of 150ms latency as of vSphere 6.0, but this is not intended for stretched clustering usage.
  • The maximum supported latency for synchronous storage replication links is 10ms RTT. Refer to documentation from the storage vendor because the maximum tolerated latency is lower in most cases. The most commonly supported maximum RTT is 5ms.
  • The ESXi vSphere vMotion network requires a redundant network link with a minimum of 250Mbps.

The storage requirements are slightly more complex. A vSphere Metro Storage Cluster requires what is in effect a single storage subsystem that spans both sites. In this design, a given datastore must be accessible—that is, be able to be read and be written to—simultaneously from both sites. Further, when problems occur, the ESXi hosts must be able to continue to access datastores from either array transparently and with no impact to ongoing storage operations.

Reference:
Download the complete document from here: vmware-vsphere-metro-storage-cluster-recommended-practices-white-paper (http://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/vmware-vsphere-metro-storage-cluster-recommended-practices-white-paper.pdf)

 

Kind Regards,
Doddi Priyambodo

How to create an autostart script in Linux (Red Hat / CentOS / Fedora)


Option 1: Use chkconfig script to run /etc/init.d/…

  1. Create a script and place it in /etc/init.d (e.g. /etc/init.d/myscript). Note that chkconfig expects both a "# chkconfig:" and a "# description:" header line. The script should have the following format:
#!/bin/bash
# chkconfig: 2345 20 80
# description: Starts and stops the custom application.
# Source function library.
. /etc/init.d/functions
start() {
    # code to start app comes here 
    # example: daemon program_name &
}
stop() {
    # code to stop app comes here 
    # example: killproc program_name
}
case "$1" in 
    start)
       start
       ;;
    stop)
       stop
       ;;
    restart)
       stop
       start
       ;;
    status)
       # code to check status of app comes here 
       # example: status program_name
       ;;
    *)
       echo "Usage: $0 {start|stop|status|restart}"
esac
exit 0 

Enable the script

  1. $ chkconfig --add myscript 
    $ chkconfig --level 2345 myscript on 
    
  2. Check the script is indeed enabled – you should see “on” for the levels you selected.
    $ chkconfig --list | grep myscript
    

You can then use the script like so: /etc/init.d/myscript start or service myscript start.

Option 2: Use a crontab job and run it at boot time.

You need to use a special string called @reboot. It will run the job once, at startup, after the system boots.

@reboot  /path/to/job
@reboot  /path/to/shell.script
@reboot  /path/to/command

This is an easy way to give your users the ability to run a shell script or command at boot time without root access. First, run the crontab command:
$ crontab -e
OR
# crontab -e -u doddi

Run a script called /home/doddi/bin/myScript.sh
@reboot /home/doddi/bin/myScript.sh 

Under RHEL / CentOS / Fedora, you need to enable crond on boot:
# chkconfig crond on
# service crond restart

If you are using a modern distro with systemd, try:
# systemctl enable crond.service
# systemctl restart crond.service
# systemctl status crond.service

 

Minimum Requirements for the VMware vCenter Appliance 6.x

I know that you can find these requirements in the Knowledge Base; I just want to write them down again as a reminder, because I get this question from my customers a lot.

Resource: Disk storage on the host machine

Embedded Platform Services Controller:
  • Tiny: 120GB
  • Small: 150GB
  • Medium: 300GB
  • Large: 450GB
External Platform Services Controller:
  • Tiny: 86GB
  • Small: 108GB
  • Medium: 220GB
  • Large: 280GB
External Platform Services Controller Appliance:
  • Tiny: 30GB
  • Small: 30GB
  • Medium: 30GB
  • Large: 30GB

Resource: Memory in the vCenter Server Appliance

Platform Services Controller only: 2GB RAM
All components on one appliance:
  • Tiny: 8GB RAM
  • Small: 16GB RAM
  • Medium: 24GB RAM
  • Large: 32GB RAM

Resource: CPUs in the vCenter Server Appliance

Platform Services Controller only: 2 CPUs
All components on one appliance:
  • Tiny: 2 CPUs
  • Small: 4 CPUs
  • Medium: 8 CPUs
  • Large: 16 CPUs
Notes:
  • Tiny Environment (up to 10 Hosts, 100 Virtual Machines)
  • Small Environment (up to 100 Hosts, 1,000 Virtual Machines)
  • Medium Environment (up to 400 Hosts, 4,000 Virtual Machines)
  • Large Environment (up to 1,000 Hosts, 10,000 Virtual Machines)
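As a convenience, the notes above can be folded into a tiny helper; the function name is my own invention, not a VMware tool, and the thresholds are simply the host counts listed above:

```shell
# Map a managed-host count to the vCSA deployment size, per the notes above.
vcsa_size_for_hosts() {
  hosts=$1
  if   [ "$hosts" -le 10 ];  then echo "Tiny"
  elif [ "$hosts" -le 100 ]; then echo "Small"
  elif [ "$hosts" -le 400 ]; then echo "Medium"
  else                            echo "Large"
  fi
}

vcsa_size_for_hosts 250   # prints "Medium"
```

Remember that the VM counts matter too; size for whichever of the two limits you will hit first.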

 

 

What is Hadoop? Why do we need to virtualize it using VMware?

What is Hadoop?

Hadoop is an Apache open-source project that provides scalable and distributed computing, originally built by Yahoo!. It provides a framework that can process large amounts of data by leveraging the parallel and distributed processing of many compute nodes arrayed in a cluster. These clusters can run on a single host or scale up to thousands of machines, depending on the workload.

What are Hadoop Components?

These are the core modules of Hadoop, which provide the foundation for its distributed computing capabilities.

  • Hadoop Common – The utilities that support the other Hadoop modules.
  • Hadoop Distributed File System – The distributed file system used by most Hadoop distributions. Also known by its initials, HDFS.
  • Hadoop YARN – Used to manage cluster resources and schedule jobs.
  • Hadoop MapReduce – A YARN-based system for processing large amounts of data.
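The MapReduce model from the list above can be illustrated with ordinary shell pipes: a "map" stage emits one key per line, sort plays the role of the shuffle phase by grouping identical keys, and uniq -c reduces each group to a count. A toy, single-machine word-count sketch (real Hadoop distributes each of these stages across the cluster):

```shell
# Toy word count in the map -> shuffle -> reduce style of Hadoop MapReduce.
printf 'to be or not to be\n' \
  | tr ' ' '\n' \
  | sort \
  | uniq -c \
  | sort -rn
```

The output lists each word with its count, most frequent first; Hadoop applies the same idea to terabytes of input split across many nodes.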

In addition to the core modules, there are others that provide specific and specialized capabilities to this distributed processing framework. These are just some of the tools:

  • Ambari – A web-based tool for provisioning, management, and monitoring of Hadoop clusters.
  • HBase – Distributed database that supports structured data storage.
  • Hive – Data warehouse model with data summarization and ad hoc query capability.
  • Pig – Data flow language.
  • ZooKeeper – Coordination service for distributed applications.

These are the modules available from the Apache open-source project, but there are also more than thirty companies that provide Hadoop distributions, which include the open-source code as well as competing management solutions, processing engines, and many other features. Some of the best known and most widely used come from Cloudera, MapR, and Hortonworks.

Why do we need to Virtualize Hadoop workloads?

Now that we know what Hadoop is, and since we always discuss virtualization on this blog: is Hadoop suitable for virtualization? Yes. If you would like the following additional benefits for Hadoop, you should consider virtualizing the workload.

  • Better resource utilization:
    Collocating virtual machines containing Hadoop roles with virtual machines containing different workloads on the same set of VMware ESXi™ server hosts can balance the use of the system. This leads to lower operating expenses and lower capital expenses as you can leverage the existing infrastructure and skills in the data center and you do not have to invest in bare-metal servers for your Hadoop deployment.
  • Alternative storage options:
    Originally, Hadoop was developed with local storage in mind, and this type of storage scheme can be used with vSphere as well. The shared storage that is frequently used as a basis for vSphere can also be leveraged for Hadoop workloads. This reinforces leveraging the existing investment in storage technologies for greater efficiencies in the enterprise.
  • Isolation:
    This includes running different versions of Hadoop itself on the same cluster, or running Hadoop alongside other applications, forming an elastic environment or separate Hadoop tenants. Isolation can reduce your overall security risk, help ensure you are meeting your SLAs, and support offering Hadoop as a service back to the lines of business.
  • Availability and fault tolerance:
    The NameNode, the Resource Manager and other Hadoop components, such as Hive Metastore and HCatalog, can be single points of failure in a system. vSphere services such as VMware vSphere High Availability (vSphere HA) and VMware vSphere Fault Tolerance (vSphere FT) can protect these components from server failure and improve availability.
  • Balance the loads:
    Resource management tools such as VMware vSphere vMotion® and VMware vSphere Distributed Resource Scheduler™ (vSphere DRS) can provide availability during planned maintenance and can be used to balance the load across the vSphere cluster.
  • Business-critical applications:
    Uptime considerations are just as important in a Hadoop environment; why would the enterprise want to go back to a time when servers and server components were single points of failure? Leverage the existing investment in vSphere to meet SLAs and provide an excellent service back to the business.

VMware also has a component called VMware Big Data Extensions (https://www.vmware.com/products/big-data-extensions) to rapidly deploy highly available Hadoop components and easily manage the infrastructure workloads.

vSphere Big Data Extensions enables rapid deployment, management, and scaling of Hadoop in virtual and cloud environments. Its built-in scale-in/scale-out capabilities also enable on-demand Hadoop instances.

From simple cloning to sophisticated end-user provisioning products such as VMware vRealize Automation™, these tools can speed up the deployment of Hadoop. This enables IT to act as a service provider to the business and offer Hadoop as a service to the different lines of business, providing faster time to market. This will further enable today’s IT to be a value driver rather than being seen as a cost center.

For more detail about VMware Big Data Extensions, please see this datasheet from VMware Inc.: https://www.vmware.com/files/pdf/products/vsphere/VMware-vSphere-Big-Data-Extensions-Datasheet.pdf

 

Kind Regards,
Doddi Priyambodo

Installation and Documentation Guide for VMware SDDC Proof of Concept

POC installation and documentation guides are generally available online, both on the VMware website and in various blogs, but these are some recommendations:

Google.com and VMware.com of course…

 

Kind Regards,
Doddi Priyambodo