The Best Documents for Explaining the VMware NSX Design Guidelines

Below are two very detailed official documents from VMware that explain the things to consider when designing an NSX solution:

They can be downloaded officially from the VMware website:

If you want to do NSX hands-on, and want to learn its step-by-step usage live, have a look here: http://docs.hol.vmware.com/catalog/  (search for “NSX”)

Hope it is useful.

 

Kind Regards,
Doddi Priyambodo


Best Practice Guide for Running Java on VMware vSphere

vSphere is now highly reliable for running all kinds of Business Critical Applications, built in various programming languages such as Java, .NET, and others. Heavily loaded database systems such as Billing and Analytics are also very well supported on vSphere, whether using Oracle Database, SQL Server, or others. Besides good performance, the biggest benefits are the High Availability mechanism and the more advanced Operational and Management mechanisms for monitoring the health of these applications. For Java specifically, the best practice guide highlights several general points to pay attention to, among them memory usage on the Java Virtual Machine.

  • Size the Virtual Machine’s memory to accommodate the configured Java heap, the memory needed by the JVM code itself, and whatever other memory the Guest Operating System is using.
  • Set the Memory Reservation on that Virtual Machine to the amount calculated above, or reserve the VM’s entire memory size (as long as it exceeds the figure above). If memory swapping occurs, JVM heap performance will drop, especially during Garbage Collection.
  • Determine the optimal number of virtual CPUs for the virtual machine by testing several vCPU configurations under the same load.
  • If the JVM’s Garbage Collector runs multiple threads, make sure the thread count matches the number of virtual CPUs configured on the virtual machine (see the sketch after this list).
  • To simplify monitoring and maintenance, preferably run one JVM process per virtual machine.
  • Always enable the Balloon Driver, so that if memory overcommitment occurs the virtual machine can manage its memory through this mechanism.
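As an illustration of the sizing and GC-thread points above (the figures, flags, and myapp.jar below are my own assumptions for a hypothetical application, not numbers from the guide):

# Hypothetical sizing: 4 GB heap + ~1 GB JVM overhead (code cache, metaspace,
# thread stacks) + ~1 GB for the guest OS = a 6 GB VM, reserved in full.
# Start the JVM with a fixed heap and GC threads matching the 4 vCPUs:
java -Xms4g -Xmx4g -XX:ParallelGCThreads=4 -jar myapp.jar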

In summary, this best practice guide can be downloaded from this link:

PS:
– In my previous post, I reviewed the Best Practices for running Oracle Database on vSphere. >> http://bicarait.com/?s=oracle+database


Tutorial: Installing DNS, NTP, NFS, and iSCSI Servers plus VMware vSphere 6.5 (ESXi and vCenter) on VMware Fusion for Mac or VMware Workstation for Windows/Linux, for a Virtualization Home Lab.

Continuing my previous posts, this time I am writing in Indonesian, because I feel plenty of people have already written articles like this in English, but rarely in Indonesian. To celebrate the new launch of vSphere 6.5, this time I will install and configure those components on VMware Fusion on my MacBook Pro. This post actually completes my earlier posts about the Home Lab facility I own for exploring VMware technology:

Let’s go straight to the point. Here are the steps to prepare a VMware Home Lab on a MacBook Pro laptop, using Nested Virtualization with VMware Fusion as the virtualization technology. (The same steps can also be used to install on VMware Workstation on a Windows or Linux OS, or even on top of vSphere ESXi.)

  1. Prepare a DNS Server and NTP Server (I am using CentOS)
  2. Prepare an LDAP Server or Active Directory (I am using OpenLDAP) – optional
  3. Prepare a centralized storage server (I am using NFS and iSCSI from Openfiler)
  4. Install ESXi 6.5 – let’s try out its newest features right away!
  5. Install and configure vCenter 6.5 with embedded PSC (I will install it on Fusion, not on ESXi)
  6. (other post) Install and configure vSphere Integrated Containers 1.0 & Harbor – I already did this earlier, please read my previous post [here]
  7. (other post) Install and configure vRealize Operations 6.4
  8. (other post) Install and configure Log Insight 4.0
  9. (other post) Install and configure vRealize Automation 7.2
  10. (other post) Install and configure NSX and vRealize Network Insight! – currently not supported on vSphere version 6.5
  11. (other post) Install and configure vCloud Director for SP 8.10 and vSphere Integrated OpenStack 3 – currently not supported on vSphere version 6.5

Here are the CPU and memory dimensioning, IP addresses & credentials that will be used. (Keep in mind that this dimensioning is only what I use in my home lab; if you deploy to production, do proper CPU, memory, and storage dimensioning.)

[table: CPU/memory sizing, IP addresses, and credentials]

Okay, let’s do this tutorial step by step.

1. Installing and Configuring the DNS Server, NTP Server, and OpenLDAP

This time we will go step by step through building a DNS server using the BIND package on CentOS 7, followed by the NTP daemon for the NTP server. The CentOS 7 I am using is a minimal-package Linux build, since it only serves as a supporting server for DNS and NTP. vSphere and other VMware components such as NSX depend heavily on the DNS and NTP services, with Active Directory (OpenLDAP) as an optional requirement. Follow these steps.

Download and deploy the latest version of the CentOS minimal package (636 MB, CentOS-7-x86_64-Minimal-1503-01.iso)

Install BIND as DNS server

Check the hostname configuration in your DNS machine

# hostnamectl status
# hostnamectl set-hostname domain01.lab.bicarait.com

Update the repository in your Linux OS, then install the BIND package

# yum update -y
# yum install bind
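
On a minimal CentOS image, the DNS client tools used later in this tutorial (dig, nslookup) live in a separate package; installing it as well is my addition, not part of the original steps:

# yum install bind-utils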

Open the configuration file with an editor, and change it as follows

# vim /etc/named.conf
options {
listen-on port 53 { any; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
allow-query { any; };
recursion yes;
dnssec-enable no;
dnssec-validation no;
};

Add these lines to that same BIND configuration file, /etc/named.conf

zone "lab.bicarait.com" IN {
type master;
file "forward.lab.bicarait";
allow-update { none; };
};

zone "159.16.172.in-addr.arpa" IN {
type master;
file "reverse.lab.bicarait";
allow-update { none; };
};
  • Create a new file for the forward zone of our DNS configuration (the file name must match the one declared in /etc/named.conf).
# vim /var/named/forward.lab.bicarait
$TTL 604800
@   IN  SOA     domain01.lab.bicarait.com. root.lab.bicarait.com. (
2011071001  ; Serial
3600        ; Refresh
1800        ; Retry
604800      ; Expire
86400       ; Minimum TTL
)
@       IN  NS          domain01.lab.bicarait.com.
@       IN  A           172.16.159.2
domain01        IN      A       172.16.159.2
nas01           IN      A       172.16.159.3
ldap01          IN      A       172.16.159.4
esxi01          IN      A       172.16.159.11
esxi02          IN      A       172.16.159.12
esxi03          IN      A       172.16.159.13
vc01            IN      A       172.16.159.21
vrops01         IN      A       172.16.159.31
vrlog01         IN      A       172.16.159.32
vrni01          IN      A       172.16.159.33
  • Create a new file for the reverse zone of our DNS configuration (again matching the file name in /etc/named.conf; a reverse zone only needs NS and PTR records).
# vim /var/named/reverse.lab.bicarait
$TTL 86400
@   IN  SOA     domain01.lab.bicarait.com. root.lab.bicarait.com. (
2011071001  ;Serial
3600        ;Refresh
1800        ;Retry
604800      ;Expire
86400       ;Minimum TTL
)
@       IN      NS      domain01.lab.bicarait.com.
@       IN      PTR     lab.bicarait.com.
2       IN      PTR     domain01.lab.bicarait.com.
3       IN      PTR     nas01.lab.bicarait.com.
11      IN      PTR     esxi01.lab.bicarait.com.
12      IN      PTR     esxi02.lab.bicarait.com.
13      IN      PTR     esxi03.lab.bicarait.com.
21      IN      PTR     vc01.lab.bicarait.com.
31      IN      PTR     vrops01.lab.bicarait.com.
32      IN      PTR     vrlog01.lab.bicarait.com.
33      IN      PTR     vrni01.lab.bicarait.com.
  • Check to verify the configuration
# named-checkconf /etc/named.conf
# named-checkzone lab.bicarait.com /var/named/forward.lab.bicarait
# named-checkzone 159.16.172.in-addr.arpa /var/named/reverse.lab.bicarait
  •  Enable and start the BIND DNS service
# systemctl enable named
# systemctl start named
# systemctl status named
  •  Allow DNS port 53 through the system firewall
# firewall-cmd --permanent --add-service=dns
# firewall-cmd --permanent --add-port=53/tcp
# firewall-cmd --permanent --add-port=53/udp
# firewall-cmd --reload
  •  Fix the file permissions and SELinux contexts
# chgrp named -R /var/named
# chown -v root:named /etc/named.conf
# restorecon -rv /var/named
# restorecon /etc/named.conf
  •  Test from a client: first edit the client’s /etc/resolv.conf and add a nameserver parameter pointing to the IP of the DNS server we just configured, then run # dig or # nslookup.
# nslookup domain01.lab.bicarait.com 172.16.159.2
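
For reference, a minimal client /etc/resolv.conf for this lab would look like the lines below (the search line is my assumption, added so short hostnames resolve):

search lab.bicarait.com
nameserver 172.16.159.2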

 

2. Installing and Configuring NTPD as the NTP Server

  • Install the NTP daemon package
# yum install ntp
  •  Edit the NTP configuration file
# vim /etc/ntp.conf
driftfile /var/lib/ntp/drift
restrict 172.16.159.0 mask 255.255.255.0 nomodify notrap
server 1.id.pool.ntp.org iburst
server 3.asia.pool.ntp.org iburst
server 0.asia.pool.ntp.org iburst
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
disable monitor
logfile /var/log/ntp.log
  • Run these commands to open the firewall, enable and start the service, and test it
# firewall-cmd --add-service=ntp --permanent
# firewall-cmd --reload
# systemctl start ntpd
# systemctl enable ntpd
# systemctl status ntpd
# ntpq -p    (or: # date -R, or: # ntpdate <servername>)

 

3. Prepare an LDAP Server as Your Single Sign-On User Directory Database

Install OpenLDAP; this time I am using TurnKey LDAP. (optional)

  • Download and deploy the TurnKey LDAP OVA package.
  • Enter passwords for the root and openldap users.
  • Enter the LDAP domain = lab.bicarait.com
  • Configure the IP address, Gateway, and DNS


  • Open https://172.16.159.4/ and log in with the user cn=admin,dc=lab,dc=bicarait,dc=com
  • Add some users for your organization’s directory (a quick way to verify them follows below)
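
To sanity-check the directory from the CentOS box, an ldapsearch like the one below should list the entries; openldap-clients is the CentOS package that provides it, and the bind DN follows the admin login above:

# yum install openldap-clients
# ldapsearch -x -H ldap://172.16.159.4 -D "cn=admin,dc=lab,dc=bicarait,dc=com" -W -b "dc=lab,dc=bicarait,dc=com"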


4. Prepare the NFS Server and iSCSI Server for Shared Storage

To be able to use the VMware vMotion, DRS, and High Availability features, we need shared storage that can be accessed by all ESXi servers. Below is a tutorial for installing and configuring NFS and iSCSI as that shared storage. The parameters I use here match my home lab environment.

  • Download and deploy the latest version of Openfiler into Fusion
  • Once done, open https://172.16.159.3:446 and log in with the default user, openfiler/password (if not yet changed)
  • Enable the NFS Server and iSCSI Target services in Openfiler: Menu – Services


  • Allow access to this system from anywhere: Menu – System – Network Access Configuration


  • Add virtual hard disks in Fusion as the storage backing for NFS and iSCSI.
  • Add New Hard Disk in Fusion: one hard disk for NFS and one hard disk for iSCSI
  • In the Openfiler menu, go to: Menu – Volumes – Block Devices – click the hyperlinks for /dev/sdb and /dev/sdc


Continue by configuring the NFS file system:

  • Click the Volumes – Volume Group menu, enter the volume name (nfs), and tick /dev/sdb1.
    Then click the Add Volume Group button.


  • Click the Add Volume menu, select the nfs volume group, then fill in the volume name/description, size, and file system.


  • Click the Shares menu at the top, then click the last folder and enter a subfolder name.


  • Click the folder you just created, then click the Make Share button. Since this is a home lab, we can set public access. Then select RW host access for NFS.


  • This NFS export can now be accessed from ESXi at the following address (a command-line sketch follows below):
    IP=172.16.159.3, Folder=/mnt/nfs/vol1/data
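
As a sketch, mounting it from the ESXi command line would look like this (the datastore label ds_nfs01 is just an example name I chose):

# esxcli storage nfs add --host=172.16.159.3 --share=/mnt/nfs/vol1/data --volume-name=ds_nfs01
# esxcli storage nfs list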

Continue by configuring the iSCSI file system:

  • Click the Volumes – Volume Group menu. Enter the volume name (iscsi) and tick /dev/sdc1. Then click the Add Volume Group button.


  • Choose the Add Volume menu on the right, select iscsi in the combo box, and click the Change button. Fill in the volume name/description and size, and for the file system choose the Block type.


  • Click the Add iSCSI Target link in the right-hand menu. There is only one choice available; click Add.


  • Click the LUN Mapping menu, then click the Map button


  • Click the Network ACL menu, then click the Allow button to permit access


  • Next, configure the iSCSI adapter on ESXi from the Configuration tab – Storage Adapters – Add. Select Targets on the iSCSI Software Adapter, then enter IP=172.16.159.3 and the default port 3260 (an esxcli sketch follows below)
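
The same configuration from the ESXi shell would look roughly like this; vmhba64 is a placeholder, since the software iSCSI adapter name varies per host (check it with the list command):

# esxcli iscsi software set --enabled=true
# esxcli iscsi adapter list
# esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=172.16.159.3:3260
# esxcli storage core adapter rescan --adapter=vmhba64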

 

5. Prepare the vSphere ESXi Servers as Nested Virtualization on Mac Fusion

  • Installing ESXi on Fusion is exactly the same as installing it on x86 servers
  • Prepare ESXi; I am using the latest version, 6.5.
  • There is plenty of material on the internet covering this installation; search http://kb.vmware.com
  • The installer can be downloaded from http://my.vmware.com.


  • I will skip the installation walkthrough and go straight to the vCenter configuration part, including only a few screenshots of the installation results here.


  • There is something new in version 6.5: we can now access ESXi directly from a web page URL (the Host Client, at https://<esxi-host>/ui), without going through the vSphere Client C# desktop application of previous versions.


 

6. Prepare vCenter Server as the Centralized Management

This time I will install vCenter on Fusion, not directly on ESXi; if you want to install it on ESXi, follow the guidance at http://kb.vmware.com (just google it, it is quite simple). Anyway, to install vCenter on Fusion a few things need to be done/tweaked manually, so it differs slightly from installing directly on ESXi. Here it is step by step:

  • Download and extract the vCenter 6.5 ISO file (VMware-VCSA-all-6.5.0-4602587.iso) on the MacBook. Download it from http://my.vmware.com
  • Import the file vmware-vcenter-server-appliance-xxxxx.ova from the vcsa/ directory into Fusion, but do not click Finish when the import completes, so the Virtual Machine does not power on yet.
  • Edit the *.vmx file inside the folder of the virtual machine you just deployed. Add these lines:
guestinfo.cis.deployment.node.type = "embedded"
guestinfo.cis.vmdir.domain-name = "vsphere.local"
guestinfo.cis.vmdir.site-name = "Default-First-Site"
guestinfo.cis.vmdir.password = "VMware1!"
guestinfo.cis.appliance.net.addr.family = "ipv4"
guestinfo.cis.appliance.net.addr = "172.16.159.21"
guestinfo.cis.appliance.net.pnid = "172.16.159.21"
guestinfo.cis.appliance.net.prefix = "24"
guestinfo.cis.appliance.net.mode = "static"
guestinfo.cis.appliance.net.dns.servers = "172.16.159.2"
guestinfo.cis.appliance.net.gateway = "172.16.159.1"
guestinfo.cis.appliance.root.passwd = "VMware1!"
guestinfo.cis.appliance.ssh.enabled = "true"
hard-disk.hostBuffer = "disabled"
prefvmx.minVmMemPct = "25"

Note: Make sure the straight quotes (") in these lines do not get converted into curly quotes (“ ”) – that would cause a “Dictionary Problem” error when the VM is powered on (I ran into this myself).

  • Okay, now click Finish and the VM will power on. You will be greeted by the Photon OS logo, the Linux base of this VMware appliance. If desired, the IP address can be changed again under Customize System (press F2) in the DCUI menu.
  • Continue the vCenter configuration by opening https://172.16.159.21:5480


  • Oh, and before that: run a DNS check first to make sure the records are in place and resolvable by vCenter. SSH into vCenter and run nslookup checks against the DNS server.
# ssh-keygen -R 172.16.159.21
# ssh root@172.16.159.21

# nslookup vc01.lab.bicarait.com 172.16.159.2
# nslookup 172.16.159.21 172.16.159.2

Here are some screenshots of the vCenter configuration:

  • vCenter installation summary:


Here are some results after the vCenter configuration:

  • Appliance Administration Web Page.


  • vCenter Web Page


Let’s continue this post another time, to add the ESXi hosts into vCenter, configure the shared storage we just built on ESXi, then configure virtual machines, and so on. And of course, how to do a proper design for installing and configuring vSphere in production, because designing a Home Lab is very different from how we design a production environment! (e.g. cluster design, HA design, security design, performance design, etc.)

 

Kind Regards,
Doddi Priyambodo


Introducing the Latest Release of VMware vSphere, Version 6.5

Last October 2016, VMware introduced the newest vSphere release, version 6.5. On November 16, 2016 the software became publicly downloadable. Well, as usual there are a great many enhancements in every new version of this virtualization software, which competitors find very hard to catch up with. Some of them are:

  1. Very easy and simple to use (e.g. enhancements to vCenter)
  2. Security features “built in” to vSphere (e.g. the new VM & vMotion Encryption features)
  3. A universal application platform (e.g. optimized for vSphere Integrated Containers)
  4. Far more reliable operations (e.g. enhancements to HA, DRS, vROps)

In upcoming posts I will drill down deeper, with screenshots of this latest version taken directly from my personal lab environment.

Kind Regards,
Doddi Priyambodo


Oracle Database Standard Edition 2 Compared to Other Editions

If you read this blog, you know I also have a section specifically about Oracle Database. Several years ago I was actually an Oracle Database Administrator for Oracle 9i, 10gR2, and 11gR2, handling operations such as architecture design, deployment, performance tuning, backup, replication, clustering, and PL/SQL programming. But these days I find cloud technology more interesting than on-premise database technology, so that is one of the reasons I moved my focus to Cloud Technology (read: VMware). Anyway, the current version of Oracle Database available is 12.1.0.2 (12cR1).

In this post I would like to elaborate on the new licensing scheme Oracle introduced with version 12.1.0.2: Oracle Standard Edition 2. This is a brief explanation from Oracle’s license document:

Oracle Database Standard Edition 2 may only be licensed on servers that have a maximum capacity of 2 sockets. When used with Oracle Real Application Clusters, Oracle Database Standard Edition 2 may only be licensed on a maximum of 2 one-socket servers. In addition, notwithstanding any provision in Your Oracle license agreement to the contrary, each Oracle Database Standard Edition 2 database may use a maximum of 16 CPU threads at any time. When used with Oracle Real Application Clusters, each Oracle Database Standard Edition 2 database may use a maximum of 8 CPU threads per instance at any time. The minimums when licensing by Named User Plus (NUP) metric are 10 NUP licenses per server.

These are some notes for the customer after reading the statement above, plus a few others:

  • Oracle Database Standard Edition 2 (SE2) will replace SE and SE1 from version 12.1.0.2
  • SE2 will have a limitation of maximum 2 socket systems and a total of 16 CPU threads*
    • *note not cores!
    • SE2 is hard coded in Resource Manager to use no more than 16 CPU threads.
  • RAC is still included with SE2 but is restricted to 2 sockets across the cluster. Therefore, each server must be single socket.
  • SE One and SE will no longer be available to purchase from 10th November 2015.
  • If you need to purchase additional DB SE and SE One Licenses you must purchase SE2 instead and install the version of 11g as required from here. Note – you must still comply with the license rules for SE2.
  • Oracle is offering a FREE license migration from SE One* and SE to SE2.
    • *SE One customers will have to pay a 20% increase in support as part of the migration.
    • SE customers face no other cost increases for license or support, subject to Named User minimums being met.
  • Named user minimums for SE2 are now 10 per server
  • 12.1.0.1 was the last SE and SE1 release
  • 12.1.0.1 SE and SE1 customers will have 6 months of patching support once SE2 12.1.0.2 is released with quarterly patches still being available in Oct 2015 and Jan 2016.

Now, compared to other editions: these are the features available in SE2 compared with Oracle Database Enterprise Edition:

Continue reading Oracle Database Standard Edition 2 Compared to Other Editions


Download VMware Products Datasheet (Bundle and per-Item)

Initially I was not sure why I am posting this article, because it duplicates other content on the internet. Hmmm, well, maybe because some customers always ask me about the datasheets of VMware products; I figure it will be easier to point them to this post rather than have them google the sheets and download them one by one.

VMware Bundle Components Datasheet:

– VMware vCloud Suite Datasheet : (Download Here)
– VMware vRealize Suite Datasheet : (Download Here)
– VMware vCloud NFV : (Download Here)

VMware per-product Components Datasheet:

– VMware vSphere : (Download Here)
– VMware vCenter : (Download Here)
– VMware vCloud Director for SP : (Download Here)
– VMware vRealize Automation : (Download Here)
– VMware vRealize Operations : (Download Here)
– VMware vRealize Business for Cloud  : (Download Here)
– VMware Site Recovery Manager : (Download Here)
– VMware NSX : (Download Here)
– VMware vSAN: (Download Here)

Note: there are still other offerings from VMware such as Cloud Foundation, vSphere Integrated Containers, vRealize Code Stream, vSphere Integrated OpenStack, vRealize Log Insight, vRealize Network Insight, Workspace One, Horizon, AirWatch, etc. (… please refer to http://www.vmware.com for more detail).

Conclusion:

After reading this post, maybe some of you now realize that VMware is not just vSphere ESXi + vCenter, right? 🙂

Yeah, it’s the Software-Defined Data Center.

VMware, a global leader in cloud infrastructure and business mobility, accelerates our customers’ digital transformation journey by enabling enterprises to master a software-defined approach to business and IT. With VMware solutions, organizations are improving business agility by modernizing data centers, driving innovation with modern data and apps, creating exceptional experiences by mobilizing everything, and safeguarding customer trust with a defense-in-depth approach to cybersecurity.

 

Kind Regards,
Doddi Priyambodo


Explanation of How CPU Limit and CPU Reservation Can Slow Your VM (if you don’t do proper sizing and analysis)

In this post, I would like to share about CPU limit and CPU reservation configuration in vSphere ESXi virtualisation technology.

Actually those features are great (the configuration is also available in vCloud Director, which simply calls the corresponding configuration in vCenter). They are great if you really know how to use them properly. For example, if you would like to use CPU reservation, make sure you are not running those VMs in a fully contended/overcommitted environment. For CPU limit: if you have an application that always consumes 100% of CPU even though you keep giving the VM more CPU, then you can use the Limit configuration to cap that application’s CPU usage (but for me the Best Way is to ask your Developer to Fix the Application!).

Okay, let’s talk more about CPU Limit.

Duncan Epping and Frank Denneman (two of the most respected VMware bloggers) once said: “Look at a vCPU limit as a restriction within a specific time frame. When a time frame consists of 2000 units and a limit has been applied of 300 units it will take a full pass, so 300 “active” + 1700 units of waiting before it is scheduled again.”

So, applying a limit on a vCPU will slow your VM down no matter what, even if there are no other VMs running on that 4-socket quad-core host.

Next, let’s talk more about CPU Reservation.

Josh Odgers (another virtualisation blogger) also explained that a CPU reservation “reserves” CPU resources measured in MHz, but this has nothing to do with the CPU scheduler. So setting a reservation will help improve performance for the VM you set it on, but it will not “solve” CPU ready issues caused by “oversized” VMs, or by too high an overcommitment ratio of CPU resources.

The configuration of Limit and Reservation is done outside the Guest OS, so your Operating System (Windows/Linux/etc.) and your application (Java/.NET/C/etc.) do not know about it. Your application will request resources based on the CPU allocated to that VM.
You should minimize the use of Limit and Reservation, as they make operations more complex. (A sketch of the underlying settings follows below.)
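
For illustration only: the limit, reservation, and shares you set in the vSphere client are stored as scheduler options in the VM’s .vmx file. In this sketch, sched.cpu.min is the reservation in MHz, sched.cpu.max is the limit in MHz, and sched.cpu.shares is the relative priority under contention; the values are arbitrary examples, not recommendations:

sched.cpu.min = "1000"
sched.cpu.max = "2000"
sched.cpu.shares = "normal"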

Conclusion:

Better to rely on the default VMkernel scheduler, which already provides great scheduling functionality that takes fairness into account. If you want to prioritise one VM over the others, you can use the CPU Shares configuration instead.

But, the most important thing is: “Please Bro…, Right Size Your VM!”

 

Kind Regards,
Doddi Priyambodo

 


Docker Version Manager (DVM) to Easily Move between Docker Client Versions

Another break-time post from the continuous tutorial about cloud native applications 🙂

Sometimes when we are working in a container environment, we find that the server’s version is not the same as the client’s version, so we cannot connect to the server. To easily solve this issue, we can install dvm (Docker Version Manager) so we can easily switch our client from one version to another.

These are the steps:

$ curl -sL https://download.getcarina.com/dvm/latest/install.sh | sh

$ source /Users/doddipriyambodo/.dvm/dvm.sh
# Usage of the commands:

$ dvm ls             # list the Docker client versions installed locally
$ dvm ls-remote      # list the versions available to install
$ dvm install 1.12.3 # install that client version
$ dvm use 1.12.3     # switch this shell to the specified client version
$ dvm deactivate     # stop using the dvm-managed client in this shell

 

Kind Regards,
Doddi Priyambodo


Cannot Connect / Error Connecting to iSCSI SAN

Sorry to interrupt the tutorials about cloud native applications, this is just a quick troubleshooting note.

I ran into an issue today with my iSCSI connection to the datastore. All hosts were getting this error when trying to connect to the SAN. This is because I play with my lab a lot, and had been removing and re-adding NICs both in Fusion and on my hosts.

The error message looks something like this:

Call "IscsiManager.QueryBoundVnics" for object "iscsiManager" on ESXi / vCenter failed.

The problem was solved with the following steps:

1. Disable the iSCSI software adapter (back up your IQN and settings first; see the sketch after this list)
2. Navigate to /etc/vmware/vmkiscsid/ on the host and back up the files
3. Delete the contents of /etc/vmware/vmkiscsid/
4. Reboot the host
5. Create a new software iSCSI adapter and set its IQN to the old one backed up earlier
6. Add iSCSI port bindings and targets.
7. DONE.
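
A sketch of the backup part from the ESXi shell, assuming vmhba64 is your software iSCSI adapter (run the list command first to get the real name; the get command prints the IQN to record):

# esxcli iscsi adapter list
# esxcli iscsi adapter get --adapter=vmhba64
# cp -r /etc/vmware/vmkiscsid /tmp/vmkiscsid.bak
# rm -f /etc/vmware/vmkiscsid/*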

 

Kind Regards,
Doddi Priyambodo


Running your Docker in Production Environment using VMware vSphere Integrated Containers – (Part 2)

Following our tutorial, now we will continue to do the installation and configuration for those components.

So, rephrasing the previous blog post: by utilising vSphere Integrated Containers, developers can now use their docker commands to manage their development environments, with functionality enriched by a dedicated container management portal (VMware Admiral) and an enterprise-feature container registry (VMware Harbor). System administrators can still use their favourite management tools, such as vCenter plus vRealize Operations and Log Insight, to manage the virtual infrastructure in one holistic view.

A traditional container environment uses a host/server to run several containers. Docker can import images onto the host, but the resources are tied to that host. The challenge is that sometimes the host has a very limited set of resources; to expand them, we need to shut down the containers and the host, add resources to that physical/virtual machine, and only then can more containers be deployed. Another challenge is that a container is not portable: it cannot be moved to another host, because it is tightly coupled to the OS kernel of its container host.

Beyond resources, my earlier post already covered the other concerns about running Docker in a production environment: security, manageability, availability, diagnosis and monitoring, high availability, disaster recovery, etc. VIC (vSphere Integrated Containers) can address all of those concerns by using a resource pool as the container host and virtual machines as the containers. Plus, with the new Instant Clone feature of vSphere 6, VIC can deliver an “instant on” container experience alongside the security, portability, and isolation of a Virtual Machine. Adding extra hosts to the resource pool to dynamically grow infrastructure resources, live migration/vMotion, automatic placement/Distributed Resource Scheduler, dedicated placement/affinity, self-healing/High Availability, QoS/weight, quota/limit, guarantee/reservation, etc. add a lot of benefits to the Docker environment.

So, these are our steps to prepare the environment for vSphere Integrated Containers (VIC):

  1. Installation and configuration of vSphere Integrated Containers
  2. Installation and configuration of Harbor
  3. Installation and configuration of Admiral

So, let’s start the tutorial now.

Checking the Virtual Infrastructure Environments

  • I am running my virtual infrastructure on my Mac laptop using VMware Fusion Professional 8.5.1.
  • Currently I am using vSphere ESXi Enterprise Plus version 6 update 2, and vCenter Standard version 6 update 2.
  • I have NFS storage as my centralised storage; NTP, DNS, and DHCP are also configured in another VM.


Installation of vSphere Integrated Containers (VIC)

There are two approaches to installing VIC. This is the first one (I used this to install on my laptop):

  1. Download the installation source from github = https://github.com/vmware/vic
  2. You will download vic via a git pull command. First install the git components from here = https://git-scm.com/downloads
  3. Run this command = $ git clone https://github.com/vmware/vic
  4. After it is downloaded, go to the directory = $ cd vic
  5. Now, build the binaries using this command =
    docker run -v $(pwd):/go/src/github.com/vmware/vic -w /go/src/github.com/vmware/vic golang make all

OR, you can use the second approach (I used this to install on my VM):

  1. Download binary file from here = https://bintray.com/vmware/vic-repo/build
  2. In this personal lab, I am using this binary = https://bintray.com/vmware/vic-repo/build/6511#files
  3. Download that binary to the Virtual Machine that you will use as the VIC Management Host.
  4. Extract the file using = $ tar -zxvf vic_6511.tar.gz. NOTE: You will see the latest build as shown here; the build number “6511” will differ, since this is an active project and new builds are uploaded constantly.

Okay, the installer is now in place. In the steps above, a full build generates three primary components, found in the ./bin directory by default. The make targets used are the following:

  1. vic-machine – make vic-machine
  2. appliance.iso – make appliance
  3. bootstrap.iso – make bootstrap

Okay, after this we will deploy our Virtual Container Host into the VMware environment (I am using vCenter with ESXi as explained earlier). The installation can also target a dedicated ESXi host (without vCenter) if needed.


Now, continue by creating the Virtual Container Host in vCenter. Since I am using a Mac, I will use the Mac command prompt.

$ ./vic-machine-darwin create --target 172.16.159.150/dc1.lab.bicarait.com --compute-resource cls01.dc01.lab.bicarait.com --user administrator@vsphere.local --password VMware1! --image-store ds_fusion_01 --no-tlsverify --name virtualcontainerhost01 --bridge-network dvPgContainer01 --force


After the command above, let’s check the state of our virtual infrastructure from vCenter. We will now see a new resource pool acting as the Virtual Container Host, and a VM acting as the endpoint VM that the container host targets.



Okay, the installation is complete. Let’s try pointing a docker client at the VIC now.

docker -H 172.16.159.153:2376 --tls info


After that, let’s run the usual docker pull and run commands, the same normal operations as in my previous posts.
$ docker -H 172.16.159.153:2376 --tls \
--tlscert='./docker-appliance-cert.pem' \
--tlskey='./docker-appliance-key.pem' pull vmwarecna/nginx

$ docker -H 172.16.159.153:2376 --tls \
--tlscert='./docker-appliance-cert.pem' \
--tlskey='./docker-appliance-key.pem' run -d -p 80:80 vmwarecna/nginx

Note: for production, we must use the *.pem keys to connect to the environment. Since this is my development environment, I will skip that.
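
To verify the result, listing the containers through the VCH endpoint should show the nginx container running (this assumes the same endpoint address as in the commands above); the published port 80 can then be browsed to check that nginx answers:

$ docker -H 172.16.159.153:2376 --tls ps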

 

Okay, now finally… here is a video walking through the operation of vSphere Integrated Containers, VMware Admiral, and VMware Harbor (I already explained Admiral and Harbor in my previous blog post, here):

 

Kind Regards,
Doddi Priyambodo

 
