AWS Hands-on-Lab Workshops for Builders

At AWS, we believe everyone is a builder. The innovators, the collaborators, the creators. The ones who see what doesn't exist, and then make it exist. We believe nothing should stand in the builder's way, and dreams never have to turn off. With AWS, it's time to build on, because we are aiming to build something better for the world. In this post, for developers who would like to get their hands dirty, you can try these hands-on labs, each of which builds a sample application based on a technology you like. Please explore and … GO BUILD!

General resources:

AWS web resources:

Huge resources:

Private resources: (accessible only if you have the credentials)


While talkers Talk, builders Build!


Kind Regards,
Doddi Priyambodo

What is Cloud?

Definition of Cloud?

I usually answer this question with the following approach (from my perspective).
You can ask "What", "Where", "When", and "How" to answer it.

  1. What? >> It is a collection of IT resources (such as compute, storage, artificial intelligence, functions, frameworks, applications, etc.)
  2. Where? >> It can be accessed from the internet, so literally anywhere on earth
  3. When? >> It can be accessed anytime you want, with no time or schedule limitation
  4. How? >> You use it on a per-usage basis. You want it, you get it, and you pay for what you use.
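The pay-per-use point is easiest to see with a toy calculation. This sketch uses made-up prices (not any real provider's rates) to compare paying for 40 hours of actual use against keeping a server on all month:

```shell
# Hypothetical on-demand rate: $0.05/hour (5 cents); these numbers are
# illustrative only, not real cloud pricing.
rate_cents_per_hour=5
hours_used=40        # a test server used 40 hours in the month
hours_in_month=720
echo "pay-per-use cost: $((rate_cents_per_hour * hours_used)) cents"      # 200
echo "always-on cost:   $((rate_cents_per_hour * hours_in_month)) cents"  # 3600
```

With usage-based billing you pay only for the 40 hours you actually ran, which is where the "pay as you use" benefit comes from.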

Why Cloud? (I will elaborate on this later, but in short it is because of…)

  1. Agility
  2. Utility based cost
  3. Elasticity
  4. Breadth of Services
  5. Go Global in minutes


Kind Regards,
Doddi Priyambodo

These are top 16 Common/Killer Use Cases from VMware NSX for You!

Whenever I pitch NSX to a customer, I always start with the use cases. On-target questioning and in-depth listening to the customer's pain points are important, so we can collaborate to solve their issues and go beyond that to enhance their innovations.

For me, it is not relevant to describe byte-by-byte features and bit-by-bit capabilities in the first or second meeting. We can take that approach later, of course, if the customer would like to go in depth, or when we reach the proof-of-technology stage, as long as we already understand their goals and pain points.

But understanding the customer's expectations, and then going BEYOND them, has always been our goal in a professional consultation.

Anyway, these are some common use cases we can use for a collaborative discussion with customers, on which we can put a laser focus later on. There are around ~16 use cases where VMware NSX (network virtualization) can bring new benefits or additional capabilities to customers and make their lives simpler.

  1. Security Use Cases
    1. Network Segmentation
    2. Microsegmentation for Securing VDI Infrastructure
    3. Intelligent Grouping for Unsupported Operating System
    4. Automated Security in a Software Defined Data Center
    5. Advanced Security (IDS/IPS) Insertion (e.g. Palo Alto Networks NGFW)
    6. Collapsed DMZ
    7. Integrate Dev, Test, and Prod environment into single infrastructure
    8. Securing access to and from Jump Box servers
  2. Application Continuity Use Cases
    1. Multisite Networking and Security
    2. Data Center Consolidation/Migration (Merger & Acquisition)
    3. Hybrid/Public Cloud Integration
    4. Disaster Recovery
  3. Automation Use Cases
    1. Self Service IT
    2. Fast Application Deployment of Template
  4. Business Value Use Cases
    1. Island of Unused Compute Capacity by leveraging Stretch and Bridge
    2. Reducing Capital Outlay in expensive Hardware Devices

Those are 16 new or additional use cases we can discuss with customers when we want to talk about how VMware NSX can make their lives easier today. I will elaborate on these use cases later on, or you can contact VMware Inc. or its partners to help you solve your issues and take a small, easy step toward modernizing your data center!


Kind Regards,
Doddi Priyambodo

A Detailed Look at my Intel NUC-based VMware Home Lab for Tinkering with vSphere 6.5, NSX, VIO, Kubernetes, and PKS – #IntelNucSkull #i7

This time, I want to continue my previous post (here) about my Home Lab. Below are my earlier posts describing the Home Lab I own, along with some tutorials I have tried on it:

Anyway, I am going to explain a few things about the installation of the Intel NUC I use as my active Home Lab for tinkering with VMware products such as NSX, VIO, VIC, VRNI, and later PKS.

I really wanted a mini server like this as a portable lab I can carry around to feed my tinkering hobby.

This Intel NUC lets me do exactly that. I explained several of the reasons in my previous posts (see the links above), in addition to the nested installations I did earlier on VMware Workstation on my laptop and my home PC. Since an NSX installation needs fairly large resources, I think it is better to use dedicated hardware for it. That is one of the reasons I chose the Intel NUC instead of installing on my laptop.

The strategy is to make this Intel NUC the parent host for the nested ESXi hosts we will use. In summary:

  • Use the Intel NUC as the parent host =
  • Create several administrative VMs, such as NTP, DNS, AD, PSC, vCenter, etc.
  • Create a nested ESXi host as data center 1 =
  • Create a nested ESXi host as data center 2 =

Below is a capture of the Intel NUC that will be configured for the VMware SDDC:

The Intel NUC's specification has been upgraded to the maximum capacity this server can handle. Here is a DCUI screenshot illustrating the specification (in summary: 4 physical CPU cores with multithreading, 32 GB RAM, and a 480 GB SSD).

Here is the detailed specification of this mini server:

  • Processor: 6th generation Intel Core i7-6770HQ processor (2.6 to 3.5 GHz turbo, Quad Core, 6 MB Cache, 45W TDP)
  • System Memory: 32GB (Kingston DDR4 2133)
  • Storage: Intel M.2 480GB 540 series (spare M.2 slot for additional capacity)
  • Peripheral Connectivity:
    • Intel Gigabit LAN
    • One Thunderbolt 3 port with USB 3.1
    • Four Super Hi-Speed USB 3.0 ports
    • One HDMI 2.0 port and One Mini DisplayPort


First, we need to design the data center we are going to build. At a high level, the design looks like this:

The details are as follows:

  • Management Cluster

Type  Name      Hostname               IP Address  Username  Password  Remarks
Host  p-esxi50  p-esxi50.corp.local                root      VMware1!  ESXi
VM    dns-ntp   dns-ntp.corp.local                 root      VMware1!
VM    vcsa      vcsa-106.corp.local                root      VMware1!  vCenter Server
VM    nsxmgr    nsxmgr-106.corp.local              root      VMware1!  NSX Manager
VM    psc       psc-106.corp.local                 root      VMware1!

  • Compute Cluster

Type  Name            Hostname             IP Address  Username  Password  Remarks
Host  n-esxi51        n-esxi51.corp.local              root      VMware1!  Nested ESXi
Host  n-esxi52        n-esxi52.corp.local              root      VMware1!  Nested ESXi
VM    nsx-esg                                          root      VMware1!  NSX Edge
VM    nsx-dlr                                          root      VMware1!  NSX Edge
VM    nsx-controller                                                       NSX Controller
VM    web01                                            root      VMware1!  3-Tier App (Web)
VM    web02                                            root      VMware1!  3-Tier App (Web)
VM    app01                                            root      VMware1!  3-Tier App (App)
VM    db01                                             root      VMware1!  3-Tier App (Db)
  • Other additional information (please ignore this, as this is only my personal note)
    • VIC, VIO, vROps, Log Insight, VRNI

The installation steps are as follows:

  1. Install vSphere ESXi on the Intel NUC using a USB flash drive
    1. First read the notes from here, because a few parameters need to be disabled in the BIOS for the installation on the Intel NUC to run properly.
    2. Install ESXi on the Intel NUC. Beforehand, create a bootable USB flash drive for the ESXi installation with Rufus (download it from here: – and follow the guidance from here: Then install vSphere ESXi following this guidance: (feature walkthrough)
  2. Install VMware vSphere (ESXi & vCenter) + NSX (NSX Manager & NSX Controller)

Download the components from here:

To speed up installation and configuration, since this lab is for demo and development purposes, rather than installing each component one by one through the GUI wizard (as I did earlier to prepare the personal lab on my laptop, please read …), we can also use the automation scripts written by my VMware colleagues Wen Bin Tay, Nick Bradford, and William Lam.

Here is the step by step:

  1. vSphere Installation:
  2. NSX Installation:

The scripts are written in PowerCLI, the Windows PowerShell interface used to manage a VMware vSphere environment (

In general, the script deploys VMware's virtualization platform, including the vCenter Server Appliance (VCSA), nested ESXi hosts, the NSX components, and a sample Three-Tier Web Application. Keep in mind that this automated nested-ESXi installation is only recommended for development environments. It is not recommended for production.

  3. Review the results:

The virtual machines on the parent host:

An overview of all IP addresses:

vCenter overview:

  4. DONE


Best Regards,
Doddi Priyambodo

Troubleshooting slow application performance on VMware virtualization

Once we enter the world of IT operations, many operational matters require troubleshooting, usually triggered by slow performance of an application. If this happens in a virtualized environment, we need to make sure the infrastructure we manage can deliver the SLA we previously committed to.

At a high level, these are the key areas to look at when troubleshooting a VM:
1. Ensure the problem is not on the application side by working together with the apps team – application logic, memory leaks, efficient I/O commands, etc.
2. Check the infrastructure side, from the VM down to the layers behind it (compute, storage, network)

Here is what we can do during troubleshooting:

1. Check the health of the virtual machines

Capacity issues (examples):

• CPU Demand > 90%
• CPU Run Queue > 3 per vCPU
• CPU Swap Wait high, CPU I/O Wait high
• RAM Free < 250 MB
• RAM Committed > 70%
• Page-in rate is high
• Disk queue length > ___
• Disk IOPS, throughput, or OIO is high
• Low disk space
• Network usage is high

Non-capacity issues (examples):

• Wrong driver (storage driver, network driver) or its settings
• Too many snapshots or large snapshots
• VMware Tools not running
• VM vCPU usage unbalanced
• App configured wrongly, not indexed
• Memory leak
• Network latency is high, or TCP retransmits
• VM too big, process ping-pong, high context switching
• NUMA effect
• Guest OS power setting

2. Check the health of the infrastructure layer

Infra unable to cope (examples):

• ESXi CPU insufficient: Demand > 90%, VM CPU Co-Stop > 1%, CPU Ready > 5%, number of cores too small for the VM
• ESXi RAM insufficient: VM balloon active, VM RAM swap-in is high, NUMA migration
• ESXi disk IOPS or throughput is high
• ESXi vmkernel queue or latency is high
• Datastore latency is high
• ESXi vmnic usage is high

Other issues (examples):

• VM was vMotioned
• ESXi vmnic dropped packets or generated errors
• ESXi wrong configuration: power management, multi-pathing, driver version, queue depth setting
• Hardware fault: disk soft error, bad sector, RAM error

The next question is how to check those parameters as quickly and as easily as you can, to troubleshoot and solve the issue you are facing. Well, the fastest answer is to use the tool I discussed in my previous post: VMware vRealize Operations Manager.
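If you only have shell access to a host rather than vROps, several of the host-side counters above can also be sampled with esxtop and esxcli directly on ESXi. This sketch only composes and prints the commands to run on the host (the flag values and the vmnic0 name are my own example choices):

```shell
# esxtop batch mode: 3 samples, 5 seconds apart, saved for offline review.
esxtop_cmd="esxtop -b -d 5 -n 3"
# Per-NIC statistics, including dropped packets and errors.
nic_cmd="esxcli network nic stats get -n vmnic0"

echo "$esxtop_cmd > /tmp/esxtop-snapshot.csv"
echo "$nic_cmd"
```

The CSV from esxtop batch mode contains the CPU Ready, Co-Stop, ballooning, and latency counters referenced in the tables above.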


Kind Regards,
Doddi Priyambodo


Tutorial: Installing DNS, NTP, NFS, iSCSI, and VMware vSphere 6.5 (ESXi and vCenter) on VMware Fusion for Mac or VMware Workstation for Windows/Linux, for a Virtualization Home Lab.

Continuing my previous post, this time I wrote in Indonesian, since many people have written articles like this in English but rarely in Indonesian. To celebrate the recent launch of vSphere 6.5, I will install and configure its components on VMware Fusion on my MacBook Pro. This article also complements my earlier posts about the Home Lab facilities I use to explore VMware technology:

Let's go straight to the point. Here are the steps to prepare a VMware Home Lab on a MacBook Pro using nested virtualization with VMware Fusion. (The same steps can be used to install on VMware Workstation on Windows or Linux, or even on top of vSphere ESXi.)

  1. Prepare a DNS server and NTP server (I use CentOS)
  2. Prepare an LDAP server or Active Directory (I use OpenLDAP) – optional
  3. Prepare a centralized storage server (I use NFS and iSCSI from Openfiler)
  4. Install ESXi 6.5 – let's try its newest features right away!
  5. Install and configure vCenter 6.5 with embedded PSC (I will install it on Fusion, not on ESXi)
  6. (other post) Install and configure vSphere Integrated Containers 1.0 & Harbor – I already did this earlier, please read my previous post in [here]
  7. (other post) Install and configure vRealize Operations 6.4
  8. (other post) Install and configure Log Insight 4.0
  9. (other post) Install and configure vRealize Automation 7.2
  10. (other post) Install and configure NSX and vRealize Network Insight! – currently not supported for vSphere version 6.5
  11. (other post) Install and configure vCloud Director for SP 8.10 and vSphere Integrated OpenStack 3 – currently not supported for vSphere version 6.5

Below are the CPU and memory dimensioning, IP addresses, and credentials that will be used. (Keep in mind that this dimensioning is only what I use in my home lab; for a production deployment, size the CPU, memory, and storage properly.)


Okay, let's do this tutorial step by step.

1. Install and Configure the DNS Server, NTP Server, and OpenLDAP

This is a step-by-step tutorial to build a DNS server using the BIND package on CentOS 7, followed by the NTP daemon for the NTP server. The CentOS 7 I use is a minimal installation, since it only serves as a supporting server for DNS and NTP. vSphere and other VMware components such as NSX depend heavily on DNS and NTP services, with Active Directory (OpenLDAP) as an optional requirement. Follow these steps.

Download and deploy the latest CentOS minimal package (636 MB, CentOS-7-x86_64-Minimal-1503-01.iso)

Install BIND as the DNS server

Check the hostname configuration on your DNS machine

# hostnamectl status
# hostnamectl set-hostname

Update the repositories on your Linux OS, then install the BIND package

# yum update -y
# yum install bind

Open the configuration file with an editor and change it

# vim /etc/named.conf
options {
listen-on port 53 { any; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
allow-query { any; };
recursion yes;
dnssec-enable no;
dnssec-validation no;
};

Add these lines to the same BIND configuration file, /etc/named.conf

zone "" IN {
type master;
file "forward.lab.bicarait";
allow-update { none; };
};

zone "" IN {
type master;
file "reverse.lab.bicarait";
allow-update { none; };
};
  • Create a new file for the forward zone of our DNS configuration.
# vim /var/named/
$TTL 604800
@   IN  SOA (
2011071001  ; Serial
3600        ; Refresh
1800        ; Retry
604800      ; Expire
86400       ; Minimum TTL
@       IN  NS
@       IN  A 
domain01        IN      A
nas01           IN      A
ldap01          IN      A
esxi01          IN      A
esxi02          IN      A
esxi03          IN      A
vc01            IN      A
vrops01         IN      A
vrlog01         IN      A
  • Create a new file for the reverse zone of our DNS configuration.
# vim /var/named/
$TTL 86400
@   IN  SOA (
2011071001  ;Serial
3600        ;Refresh
1800        ;Retry
604800      ;Expire
86400       ;Minimum TTL
@       IN      NS
@       IN      PTR
domain01        IN      A
nas01           IN      A
esxi01          IN      A
esxi02          IN      A
esxi03          IN      A
vc01            IN      A
vrops01         IN      A
vrlog01         IN      A
vrni01          IN      A
2       IN      PTR
3       IN      PTR
11      IN      PTR
12      IN      PTR
13      IN      PTR
21      IN      PTR
31      IN      PTR
32      IN      PTR
33      IN      PTR
  • Check to verify the configuration
# named-checkconf /etc/named.conf
# named-checkzone /var/named/
# named-checkzone /var/named/
  •  Start and enable the DNS BIND service
# systemctl enable named
# systemctl start named
# systemctl status named
  •  Allow DNS port 53 through the firewall
# firewall-cmd --permanent --add-service=dns
# firewall-cmd --permanent --add-port=53/tcp
# firewall-cmd --permanent --add-port=53/udp
# firewall-cmd --reload
  •  Fix the ownership and permissions of the files
# chgrp named -R /var/named
# chown -v root:named /etc/named.conf
# restorecon -rv /var/named
# restorecon /etc/named.conf
  •  Verify from a client: first edit the /etc/resolv.conf file and add a nameserver entry pointing to the IP of the DNS server we just configured. Then run # dig or # nslookup.
# nslookup


2. Install and Configure NTPD as the NTP Server

  • Install the NTP daemon package
# yum install ntp
  •  Edit the NTP configuration file
# vim /etc/ntp.conf
driftfile /var/lib/ntp/drift
restrict mask nomodify notrap
server iburst
server iburst
server iburst
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
disable monitor
logfile /var/log/ntp.log
  • Run these commands to open the firewall, start and enable the service, and test it
# firewall-cmd --add-service=ntp --permanent
# firewall-cmd --reload
# systemctl start ntpd
# systemctl enable ntpd
# systemctl status ntpd
# ntpq -p  (or # date -R, or # ntpdate <servername>)


3. Prepare an LDAP Server as your Single Sign-On User Directory

Install OpenLDAP; in this case I use TurnKey LDAP. (optional)

  • Download and deploy the TurnKey LDAP OVA package.
  • Enter passwords for the root and openldap users.
  • Enter the LDAP domain =
  • Configure the IP address, gateway, and DNS


  • Open the web UI, then log in with cn=admin,dc=lab,dc=bicarait,dc=com
  • Add some users for your organisation's directory


4. Prepare the NFS Server and iSCSI Server for Shared Storage

To use the VMware vMotion, DRS, and High Availability features, we need shared storage that all ESXi servers can access. Here is the tutorial to install and configure NFS and iSCSI as shared storage. The parameters I use here match my own home-lab environment.

  • Download and deploy the latest version of Openfiler to Fusion
  • When it is done, open the web UI and log in with the default user, openfiler/password (if not yet changed)
  • Enable the NFS server and iSCSI target services in Openfiler: Menu – Services


  • Allow access to this system from anywhere: Menu – Systems – Network Access Configuration


  • Add virtual hard disks in Fusion as backing storage for NFS and iSCSI.
  • Add new hard disks from Fusion: one hard disk for NFS and one for iSCSI
  • In the Openfiler menu, go to: Menu – Volumes – Block Devices – click the hyperlinks for /dev/sdb and /dev/sdc


Continue with the configuration for the NFS file system:

  • Click Volumes – Volume Group, enter the volume name (nfs), and tick /dev/sdb1.
    Then click the Add Volume Group button.


  • Click Add Volume, select the nfs volume group, then fill in the volume name, size, and file system.


  • Click the Shares menu at the top, then click the last folder and enter a subfolder name.


  • Click the newly created folder, then click the Make Share button. Since this is a home lab, we can set public access. Then choose RW host access for NFS.


  • This NFS share can now be accessed from ESXi at the following address:
    IP=, Folder=/mnt/nfs/vol1/data
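As a sketch of how that mount looks from the ESXi side, the export can be added as a datastore with esxcli. The server address and datastore name below are placeholders (the real IP is omitted in this post), and the command is only printed here, to be run on the ESXi shell:

```shell
# Placeholders: substitute your Openfiler IP and preferred datastore name.
NFS_SERVER="<openfiler-ip>"
NFS_SHARE="/mnt/nfs/vol1/data"
DS_NAME="ds_nfs01"

# Compose the esxcli command that mounts the NFS export as a datastore.
cmd="esxcli storage nfs add -H $NFS_SERVER -s $NFS_SHARE -v $DS_NAME"
echo "$cmd"
```

The same mount can of course be done from the vSphere Client (Add Storage wizard); esxcli is just handy for scripting it across several nested hosts.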

Continue with the configuration for the iSCSI file system:

  • Click Volumes – Volume Group. Enter the volume name (iscsi) and tick /dev/sdc1. Then click the Add Volume Group button.


  • Choose Add Volume in the right-hand menu, select iscsi in the combo box, and click the Change button. Fill in the volume name and size, and choose Block for the file system


  • Click the Add iSCSI Target link in the right-hand menu. There is only one choice available; click Add.


  • Click the LUN Mapping menu, then click the Map button


  • Click the Network ACL menu, then click the Allow button to permit access


  • Next, configure the iSCSI adapter from ESXi in the Configuration tab – Storage Adapters – Add. Choose Targets in the iSCSI Software Adapter, then enter IP= and the default port 3260


5. Prepare vSphere ESXi as Nested Virtualization on top of Fusion for Mac

  • Installing ESXi on Fusion works exactly the same way as installing it on x86 servers
  • Prepare ESXi; I use the latest version, 6.5.
  • There are plenty of tutorials for this installation on the internet; please search at
  • The installer can be downloaded at


  • I will skip the installation tutorial itself and go straight to the vCenter configuration. I will only include a few screenshots of the installation results here.


  • One new thing in version 6.5: we can access ESXi directly from a web page, without going through the C# desktop vSphere Client of previous versions.



6. Prepare vSphere vCenter Server for Centralized Management

This time I will install vCenter on Fusion, not directly on ESXi. If you want to install it on top of ESXi, follow the guidance at (just search Google; it is quite simple). To install vCenter on Fusion, a few things need to be done/tweaked manually, so it is slightly different from installing directly on ESXi. Here is the step by step:

  • Download and extract the vCenter 6.5 ISO file (VMware-VCSA-all-6.5.0-4602587.iso) on the MacBook. Download it from
  • Import the file vmware-vcenter-server-appliance-xxxxx.ova from the vcsa/ directory into Fusion. Do not click Finish when the import completes, so the virtual machine does not power on.
  • Edit the *.vmx file inside the deployed virtual machine's folder. Add these lines:
guestinfo.cis.deployment.node.type = "embedded"
guestinfo.cis.vmdir.domain-name = "vsphere.local" = "Default-First-Site"
guestinfo.cis.vmdir.password = "VMware1!" = "ipv4" = "" = "" = "24" = "static" = "" = ""
guestinfo.cis.appliance.root.passwd = "VMware1!"
guestinfo.cis.appliance.ssh.enabled = "true"
hard-disk.hostBuffer = "disabled"
prefvmx.minVmMemPct = 25

Note: make sure the straight quotes " do not get converted into curly quotes “ – that causes a "Dictionary Problem" error when the VM is powered on (I ran into this myself).

  • Okay, now click Finish and the VM will power on. You will be greeted by the Photon Linux logo, the base OS of this VMware appliance. If desired, the IP address can be changed again under Customize System (press F2) in the DCUI menu.
  • Continue the vCenter configuration by opening the page


  • Oh, and before that, check DNS first to make sure the records are stored and can be resolved by vCenter. SSH into vCenter and run nslookup against the DNS server.
# ssh-keygen -R
# ssh root@

# nslookup
# nslookup

Here are some screenshots of the vCenter configuration:

  • vCenter installation summary:


Here are some results after configuring vCenter:

  • Appliance administration web page.


  • vCenter web page


Let's continue this post another time: joining ESXi to vCenter, configuring the shared storage we created on ESXi, configuring virtual machines, and so on. And of course, how to do a proper design for installing and configuring vSphere in production, because designing a Home Lab is very different from designing a production environment! (e.g. cluster design, HA design, security design, performance design, etc.)


Kind Regards,
Doddi Priyambodo

Explanation about How CPU Limit and CPU Reservation can Slow your VM (if you don’t do a proper sizing and analysis)

In this post, I would like to share about CPU limit and CPU reservation configuration in vSphere ESXi virtualisation technology.

Actually those features are great (the configuration is also available in vCloud Director, which simply calls the corresponding configuration in vCenter), but only if you really know how to use them properly. For example, if you want to use a CPU reservation, make sure you are not running those VMs in a fully contended/overcommitted environment. As for CPU limits: if an application always consumes 100% of the CPU no matter how much CPU you give the VM, you can use a limit to cap that application's usage (although for me the best way is to ask your developer to fix the application!).

Okay, let’s talk more about CPU Limit.

Duncan Epping and Frank Denneman (two of the most respected VMware bloggers) once said: "Look at a vCPU limit as a restriction within a specific time frame. When a time frame consists of 2000 units and a limit has been applied of 300 units it will take a full pass, so 300 "active" + 1700 units of waiting before it is scheduled again."

So, applying a limit on a vCPU will slow your VM down no matter what. Even if there are no other VMs running on that 4 socket quad core host.
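To make the quoted numbers concrete, here is a tiny shell sketch of the arithmetic. The 2000-unit frame and 300-unit limit come from the quote above; everything else follows from them:

```shell
# A vCPU limited to 300 units out of a 2000-unit scheduling frame runs
# 300 units, then waits the rest of the frame before being scheduled again.
frame=2000
limit=300
waiting=$((frame - limit))          # units spent waiting per frame
duty=$((100 * limit / frame))       # percentage of the frame spent running

echo "waiting units per frame: $waiting"   # 1700
echo "duty cycle: $duty%"                  # 15%
```

A 15% duty cycle applies even on an otherwise idle host, which is exactly why the limit slows the VM down no matter what.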

Next, let’s talk more about CPU Reservation.

Josh Odgers (another virtualisation blogger) also explained that CPU reservation “reserves” CPU resources measured in Mhz, but this has nothing to do with the CPU scheduler. So setting a reservation will help improve performance for the VM you set it on, but will not “solve” CPU ready issues caused by “oversized” VMs, or by too high an overcommitment ratio of CPU resources.

Limits and reservations are configured outside the guest OS, so your operating system (Windows/Linux/etc.) and your application (Java/.NET/C/etc.) are not aware of them. The application will request resources based on the CPU allocated to the VM.
You should minimize the use of limits and reservations, as they make operations more complex.


It is better to rely on the default VMkernel scheduler, which already does a great job of taking fairness into account. If you want to prioritise one VM over others, you can use the CPU shares configuration instead.

But, the most important thing is: “Please Bro…, Right Size Your VM!”


Kind Regards,
Doddi Priyambodo


Can Not Connect / Error Connecting to iSCSI SAN

Sorry to interrupt the tutorial about cloud native applications; just a quick note about troubleshooting.

I found an issue today with my iSCSI connection to the datastore: all hosts were getting this error when trying to connect to the SAN. This is because I played with my lab a lot, and kept removing and re-adding the NICs of my Fusion setup and my host.

The error message looks something like this:

Call "IscsiManager.QueryBoundVnics" for object "iscsiManager" on ESXi / vCenter failed.

The problem is solved with the following:

1. Disable the iSCSI software adapter (back up your IQN and settings first)
2. Navigate to /etc/vmware/vmkiscsid/ on the host and back up the files
3. Delete the contents of /etc/vmware/vmkiscsid/
4. Reboot the host
5. Create a new software iSCSI adapter and set the IQN to the old one we backed up earlier
6. Add the iSCSI port bindings and targets.
7. DONE.
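The steps above can also be scripted on the ESXi shell with esxcli. This is an untested sketch that only prints the equivalent commands (the adapter name vmhba64 and the IQN are placeholders for your own values):

```shell
# Compose the esxcli equivalents of the manual fix; run them on the host.
cmds=$(cat <<'EOF'
esxcli iscsi software set --enabled=false          # step 1: disable the adapter
cp -r /etc/vmware/vmkiscsid /tmp/vmkiscsid.bak     # step 2: back up the config
rm -f /etc/vmware/vmkiscsid/*                      # step 3: clear the contents
reboot                                             # step 4: reboot the host
esxcli iscsi software set --enabled=true           # step 5: re-create the adapter
esxcli iscsi adapter set -A vmhba64 -n <old-iqn>   # restore the backed-up IQN
EOF
)
echo "$cmds"
```

Port bindings and targets (step 6) can then be re-added through the vSphere Client as before.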


Kind Regards,
Doddi Priyambodo

Running your Docker in Production Environment using VMware vSphere Integrated Containers – (Part 2)

Following our tutorial, now we will continue to do the installation and configuration for those components.

So, to rephrase the previous blog post: with vSphere Integrated Containers, developers can keep using their docker commands to manage development environments, enriched with a dedicated container management portal (VMware Admiral) and an enterprise container registry (VMware Harbor). System administrators can still use their favourite management tools, such as vCenter plus vRealize Operations and Log Insight, to manage the virtual infrastructure in one holistic view, as shown in the diagram below:

A traditional container environment uses a host/server to run several containers. Docker can import images onto the host, but the resources are tied to that host, and sometimes that host has a very limited set of resources. To expand the host's resources, we need to shut down the containers and the host, and add resources to that physical or virtual machine before more containers can be deployed. Another challenge is that a container is not portable: it cannot simply be moved to another host, because it is tightly coupled to the OS kernel of its container host.

Beyond resources, my earlier post already covered other concerns about running Docker in a production environment: security, manageability, availability, diagnostics and monitoring, high availability, disaster recovery, etc. VIC (vSphere Integrated Containers) addresses all of these by using a resource pool as the container host and virtual machines as the containers. Together with the new Instant Clone feature of vSphere 6, VIC can deliver an "instant on" container experience alongside the security, portability, and isolation of virtual machines. Adding extra hosts to the resource pool to grow resources dynamically, live migration (vMotion), automatic placement (DRS), dedicated placement (affinity), self-healing (High Availability), QoS (shares), quotas (limits), guarantees (reservations), etc. all bring real benefits to the Docker environment.

So, these are our steps to prepare the environments for vSphere Integrated Containers (VIC).

  1. Installation and configuration of vSphere Integrated Containers
  2. Installation and configuration of Harbor
  3. Installation and configuration of Admiral

So, let’s start the tutorial now.

Checking the Virtual Infrastructure Environments

  • I am running my virtualisation infrastructure on my Mac laptop using VMware Fusion Professional 8.5.1.
  • Currently I am using vSphere ESXi Enterprise Plus version 6 Update 2, and vCenter Standard version 6 Update 2.
  • I have NFS as my centralised storage; NTP, DNS, and DHCP are also configured in another VM.


Installation of vSphere Integrated Containers (VIC)

There are two approaches to install VIC. This is the first one (I used this to install on my laptop):

  1. Download the installation source from github =
  2. You will download vic using the git clone command. First install the git components from here =
  3. Run this command = $ git clone
  4. After the download completes, go to the directory = $ cd vic
  5. Now, build the binaries using this command =
    docker run -v $(pwd):/go/src/ -w /go/src/ golang make all
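
Put together, the source-based install looks roughly like this. The repository URL and the Go workspace path inside the container are assumptions based on the upstream vmware/vic project on GitHub; substitute the links given above:

```shell
# Sketch of the source build; the repo URL and /go/src path are assumptions
git clone https://github.com/vmware/vic.git
cd vic
# Build all binaries inside the official golang image, so no local Go
# toolchain is needed on the laptop
docker run -v "$(pwd)":/go/src/github.com/vmware/vic \
    -w /go/src/github.com/vmware/vic golang make all
```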

OR, you can use the second approach (I used this to install on my VM):

  1. Download binary file from here =
  2. In this personal lab, I am using this binary =
  3. Download that binary to the virtual machine that you will use as the VIC Management Host.
  4. Extract the file using = $ tar -zxvf vic_6511.tar.gz.  NOTE: You will see the latest build as shown here; the build number “6511” will differ, as this is an active project and new builds are uploaded constantly.

Okay, the installer is now in place. In the steps above, there are three primary components generated by a full build, found in the ./bin directory by default. The make targets used are the following:

  1. vic-machine – make vic-machine
  2. appliance.iso – make appliance
  3. bootstrap.iso – make bootstrap
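
If you don’t want a full `make all`, the same three artifacts can be built one at a time with the targets listed above (run from the vic source directory):

```shell
# Build the three primary components individually
make vic-machine   # the CLI used to deploy and manage Virtual Container Hosts
make appliance     # appliance.iso, the VCH endpoint VM image
make bootstrap     # bootstrap.iso, used to boot each containerVM
```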

Okay, after this we will deploy our Virtual Container Host into the VMware environment (I am using vCenter with ESXi, as explained earlier). The installation can also run on a dedicated ESXi host (without vCenter) if needed.

Now, continue to create the Virtual Container Host in vCenter. Since I am using a Mac, I will use the Mac terminal.

$ ./vic-machine-darwin create --target --compute-resource --user administrator@vsphere.local --password VMware1! --image-store ds_fusion_01 --no-tlsverify --name virtualcontainerhost01 --bridge-network dvPgContainer01 --force
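
For readability, here is the same command broken across lines with the arguments annotated. The `--target` and `--compute-resource` values are hypothetical placeholders for the values I left out above; use your own vCenter address and cluster/resource pool:

```shell
# Annotated sketch of the vic-machine create call; --target and
# --compute-resource values below are placeholders, not real lab values
./vic-machine-darwin create \
    --target 'vcenter01.lab.local' \       # placeholder vCenter FQDN/IP
    --compute-resource 'Cluster01' \       # placeholder cluster/resource pool
    --user 'administrator@vsphere.local' \
    --password 'VMware1!' \
    --image-store 'ds_fusion_01' \         # datastore for container images
    --bridge-network 'dvPgContainer01' \   # port group for container traffic
    --no-tlsverify \
    --name 'virtualcontainerhost01' \
    --force
```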


After running the command above, let’s check the state of our virtual infrastructure from vCenter. We will see a new resource pool acting as the Virtual Container Host, and an endpoint VM as the target of the container host.


Okay, installation is complete. Let’s try pointing the Docker client at the VIC now.

docker -H --tls info
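
The `-H` value is the Docker endpoint that `vic-machine create` prints when it finishes. For example (the address below is a placeholder for my lab; 2376 is the default port):

```shell
# Placeholder endpoint address; use the DOCKER_HOST value printed by
# vic-machine create at the end of the deployment
docker -H 192.168.100.50:2376 --tls info
```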


After that, let’s run the pull and run commands for Docker, the same normal operations as in my previous posts.
$ docker -H --tls \
--tlscert='./docker-appliance-cert.pem' \
--tlskey='./docker-appliance-key.pem' pull vmwarecna/nginx

$ docker -H --tls \
--tlscert='./docker-appliance-cert.pem' \
--tlskey='./docker-appliance-key.pem' run -d -p 80:80 vmwarecna/nginx

Note: for production, we must use the *.pem keys to connect to the environment. Since this is my development environment, I will skip that.
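
Once the nginx containerVM is running, a quick sanity check might look like this (the endpoint address is again a placeholder for the VCH endpoint in my lab):

```shell
# List the running containerVMs and hit the published nginx port;
# 192.168.100.50 is a placeholder for the VCH endpoint address
docker -H 192.168.100.50:2376 --tls ps
curl -I http://192.168.100.50:80    # nginx should answer with HTTP headers
```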


Okay, now finally… here is a video explaining the operation of vSphere Integrated Containers, VMware Admiral, and VMware Harbor (I already explained Admiral and Harbor in a previous blog post here):


Kind Regards,
Doddi Priyambodo


Running your Docker in Production Environment using VMware vSphere Integrated Containers – (Part 1)

In this tutorial, after explaining how to run Docker on my Mac, it’s time to move those containers from your laptop to a production environment. In VMware, we will utilise vSphere ESXi, a production-grade virtualisation technology, as the foundation of the infrastructure.

In a production environment, a lot of things need to be considered: availability, manageability, performance, reliability, scalability, and security (AMPRSS). These AMPRSS considerations can be easily achieved by moving Docker containers from your development environment (laptop) to the production environment (vSphere ESXi). One concern with Docker technology is that containers share the same kernel and are therefore less isolated than real VMs; a bug in the kernel affects every container.

vSphere Integrated Containers Engine allows developers familiar with Docker to develop in containers and deploy them alongside traditional VM-based workloads on vSphere clusters, and allows these workloads to be managed through the vSphere UI in a way familiar to existing vSphere admins.

Docker itself is far less capable than an actual hypervisor. It doesn’t come with HA, live migration, hardware virtualisation security, etc. VIC (vSphere Integrated Containers) brings the container paradigm directly to the hypervisor, allowing you to deploy containers as first-class citizens. The net result is that containers inherit all of the benefits of VMs, because they are VMs. The Docker image, once instantiated, becomes a VM inside vSphere. This solves security as well as operational concerns at the same time.

But these are NOT traditional VMs that require, for example, 2TB and take 2 minutes to boot. These are usually only as big as the Docker image itself and take a few seconds to instantiate. They boot from a minimal ISO containing a stripped-down Linux kernel (based on Photon OS), and the container images and volumes are attached as disks.

The containerVMs are provisioned into a “Virtual Container Host” (VCH), which behaves like a Swarm cluster but is implemented as logical distributed capacity in a vSphere resource pool. You don’t need to add or remove physical nodes to increase or decrease the VCH capacity; you simply reconfigure its resource limits and let vSphere clustering and DRS (Distributed Resource Scheduler) handle the details.

The biggest benefit of VIC is that it helps draw a clear line between the infrastructure provider (IT admin) and the consumer (developer/ops). The consumer wins because they don’t have to deal with managing container hosts, patching, configuring, etc. The provider wins because they can leverage the operational model they are already using today (including NSX and VSAN).

Developers will continue to develop with Docker, and IT admins will keep managing VMs. The best of both worlds.

It can also be combined with other enterprise tools to manage the enterprise environment, such as vRealize Operations, vRealize Log Insight, Virtual SAN, VMware NSX, and vRealize Automation.

In this post, I will utilise these technologies from VMware:

  • vSphere ESXi 6 U2 as the number one, well-known and stable production grade Virtualisation Technology.
  • vCenter 6 U2 as the Virtualisation central management and operation tool.
  • vSphere Integrated Containers as the Enterprise Production Ready container runtime for vSphere, allowing developers familiar with Docker to develop in containers and deploy them alongside traditional VM-based workloads on vSphere clusters. Download from here: The vSphere Integrated Containers Engine
  • VMware Admiral as the container management platform for deploying and managing container-based applications. It provides a UI for developers and app teams to provision and manage containers, including retrieving stats and info about container instances. Cloud administrators can manage container hosts and apply governance to their usage, including capacity quotas and approval workflows. Download from here: Admiral
  • VMware Harbor as an enterprise-class registry server that stores and distributes Docker images. It has a UI and the functionalities usually required by an enterprise, such as security, identity, replication, and management. Download from here: Harbor

This is the diagram block for those components:

As you can see in the diagram above, vSphere Integrated Containers comprises three main components, all of which are available as open source on GitHub. With these three capabilities, vSphere Integrated Containers enables VMware customers to deliver a production-ready container solution to their developers and app teams.


*to be continued in part 2.

Kind Regards,
Doddi Priyambodo