What is Cloud?

Definition of Cloud?

I usually answer this question with the following approach (from my perspective).
You can ask “What”, “Where”, “When”, “How”, and “Why” to answer it.

  1. What? >> it is a collection of IT resources (such as compute, storage, artificial intelligence, functions, frameworks, applications, etc.)
  2. Where? >> it can be accessed over the internet, so literally from anywhere on earth
  3. When? >> it can be accessed anytime you want, with no time or schedule limitation
  4. How? >> you consume it on a per-usage basis: you want it, you get it, and you pay for what you use

Why Cloud? (I will elaborate on this later, but in short it is because of…)

  1. Agility
  2. Utility-based cost
  3. Elasticity
  4. Breadth of Services
  5. Go Global in minutes

 

Kind Regards,
Doddi Priyambodo

Getting to Know the Artificial Intelligence and Machine Learning Services from Amazon Web Services

If you think Cloud Computing is only about “hosting your server” (which a lot of people do)… then please read some of the public materials out there and create a free account at AWS to try it yourself – because it goes way beyond that!

One of the areas I would like to talk about right now is the set of services and platforms that are available for machine learning – for creating artificial intelligence services for your customers.

At Amazon, artificial intelligence has been investigated for over 20 years. Machine learning (ML) algorithms drive many of our internal systems. It’s also core to the capabilities our customers experience – from the path optimization in our fulfillment centers, and Amazon.com’s recommendations engine, to Echo powered by Alexa, our drone initiative Prime Air, and our new retail experience Amazon Go. This is just the beginning. Our mission is to share our learnings and ML capabilities as fully managed services, and put them into the hands of every developer and data scientist.

Machine Learning Application Services – ready-to-use functions and building blocks for your advanced applications. (A quick AWS CLI sketch of these services follows the list below.)

  • Amazon Rekognition makes it easy to add image and video analysis to your applications. You just provide an image or video to the Rekognition API, and the service can identify the objects, people, text, scenes, and activities, as well as detect any inappropriate content. Amazon Rekognition also provides highly accurate facial analysis and facial recognition on images and video that you provide. You can detect, analyze, and compare faces for a wide variety of user verification, people counting, and public safety use cases.
  • Amazon Lex is a service for building conversational interfaces into any application using voice and text. Amazon Lex provides the advanced deep learning functionalities of automatic speech recognition (ASR) for converting speech to text, and natural language understanding (NLU) to recognize the intent of the text, to enable you to build applications with highly engaging user experiences and lifelike conversational interactions.
  • Amazon Polly is a service that turns text into lifelike speech, allowing you to create applications that talk, and build entirely new categories of speech-enabled products. Amazon Polly is a Text-to-Speech service that uses advanced deep learning technologies to synthesize speech that sounds like a human voice.
  • Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find insights and relationships in text. The service identifies the language of the text; extracts key phrases, places, people, brands, or events; understands how positive or negative the text is; analyzes text using tokenization and parts of speech; and automatically organizes a collection of text files by topic. Using these APIs, you can analyze text and apply the results in a wide range of applications including voice of customer analysis, intelligent document search, and content personalization for web applications.
  • Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for developers to add speech-to-text capability to their applications. Using the Amazon Transcribe API, you can analyze audio files stored in Amazon S3 and have the service return a text file of the transcribed speech.
  • Amazon Translate is a neural machine translation service that delivers fast, high-quality, and affordable language translation. Neural machine translation is a form of language translation automation that uses deep learning models to deliver more accurate and more natural sounding translation than traditional statistical and rule-based translation algorithms.
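
If you just want to get a quick feel for these services, they can all be called from the AWS CLI. Below is a minimal sketch; the bucket, file, and bot names are placeholders I made up, so replace them with your own resources:

# Rekognition: detect objects and scenes in an image stored in S3
$ aws rekognition detect-labels --image '{"S3Object":{"Bucket":"my-demo-bucket","Name":"photo.jpg"}}'

# Polly: turn text into lifelike speech and save it as an MP3 file
$ aws polly synthesize-speech --output-format mp3 --voice-id Joanna --text "Hello from Amazon Polly" hello.mp3

# Comprehend: detect the sentiment of a piece of text
$ aws comprehend detect-sentiment --language-code en --text "I really enjoy building on AWS"

# Translate: translate English text into Indonesian
$ aws translate translate-text --source-language-code en --target-language-code id --text "Hello, world"

# Transcribe: start an asynchronous speech-to-text job on an audio file in S3
$ aws transcribe start-transcription-job --transcription-job-name demo-job --language-code en-US --media-format mp3 --media MediaFileUri=s3://my-demo-bucket/audio.mp3

# Lex: send a text utterance to a bot you have already built (the bot name is hypothetical)
$ aws lex-runtime post-text --bot-name MyDemoBot --bot-alias prod --user-id demo-user --input-text "I would like to book a hotel"

Each call returns a JSON response that you can feed straight into your application.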

Instances for Deep Learning – ready-to-use EC2 instances pre-installed with popular deep learning frameworks.

  • AWS Deep Learning AMIs provide machine learning practitioners and researchers with the infrastructure and tools to accelerate deep learning in the cloud, at any scale. You can quickly launch Amazon EC2 instances pre-installed with popular deep learning frameworks such as Apache MXNet and Gluon, TensorFlow, Microsoft Cognitive Toolkit, Caffe, Caffe2, Theano, Torch, PyTorch, Chainer, and Keras to train sophisticated, custom AI models, experiment with new algorithms, or to learn new skills and techniques.
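
As a rough illustration of how you might pick up one of these AMIs from the command line, the sketch below first looks up the Deep Learning AMIs published by Amazon and then launches a GPU instance from one of them; the AMI ID, key pair name, and instance type are placeholder assumptions:

# List Amazon's Deep Learning AMIs available in the current region (the name filter is an assumption)
$ aws ec2 describe-images --owners amazon --filters "Name=name,Values=Deep Learning AMI*" --query 'Images[].[ImageId,Name]' --output table

# Launch a GPU instance from one of the returned AMI IDs (all values below are placeholders)
$ aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type p3.2xlarge --key-name my-keypair --count 1

Once the instance is running, you can SSH in and the listed frameworks are already installed and ready to use.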

Machine Learning Platform Services – ready-to-use platforms for developing your advanced applications.

  • Amazon Machine Learning is a service that makes it easy for developers of all skill levels to use machine learning technology. Amazon Machine Learning provides visualization tools and wizards that guide you through the process of creating machine learning (ML) models without having to learn complex ML algorithms and technology. Once your models are ready, Amazon Machine Learning makes it easy to obtain predictions for your application using simple APIs, without having to implement custom prediction generation code, or manage any infrastructure.
  • Amazon SageMaker is a fully-managed platform that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. Amazon SageMaker removes all the barriers that typically slow down developers who want to use machine learning.
  • AWS DeepLens is a deep-learning-enabled video camera (hardware) for developers. It puts deep learning in the hands of developers, literally, with a fully programmable video camera, tutorials, code, and pre-trained models designed to expand deep learning skills.
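
As one small, hedged example of where you might start with Amazon SageMaker, the sketch below creates a managed Jupyter notebook instance from the AWS CLI; the notebook name and IAM role ARN are placeholders you would replace with your own:

# Create a small managed notebook instance for experimentation
$ aws sagemaker create-notebook-instance --notebook-instance-name my-sagemaker-notebook --instance-type ml.t2.medium --role-arn arn:aws:iam::123456789012:role/MySageMakerExecutionRole

# Check when the notebook instance is ready ("InService")
$ aws sagemaker describe-notebook-instance --notebook-instance-name my-sagemaker-notebook --query 'NotebookInstanceStatus'

From there you can open the Jupyter notebook in a browser and build, train, and deploy models using the SageMaker SDK.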

PS: Try it yourself to see how easy it is to start BUILDING your service on top of the AWS Cloud platform (use the FREE account! NOW!) – at Amazon we like to say “Get your Hands Dirty!”

 

Kind Regards,
Doddi Priyambodo

A Detailed Look at My INTEL-NUC-Based VMware Home Lab for Tinkering with vSphere 6.5, NSX, VIO, Kubernetes, and PKS – #IntelNucSkull #i7

This time, I want to continue my previous post (here) about my home lab. Below are my earlier posts describing the home lab I own, along with a few tutorials I have tried on it:

Anyway, I will explain a few things about the installation of the INTEL NUC I own as my active home lab, which I use to tinker with VMware products such as NSX, VIO, VIC, VRNI, and eventually PKS.

I really want to use this mini server as a portable mini lab that I can carry around to indulge my tinkering hobby.

This INTEL NUC is where I can channel that hobby. I explained some of the reasons in my previous posts (see the links above), in addition to the nested installations I previously did on VMware Workstation on my laptop and home PC. Since an NSX installation requires fairly large resources, I think it is better to use dedicated hardware for it. That is one of the reasons for choosing the INTEL NUC rather than installing on my laptop.

The strategy we will use is to make this INTEL NUC the parent host for several nested ESXi hosts. In summary:

  • Use Intel NUC as Parent Host = 192.168.106.50
  • Create several administrative VMs, such as NTP, DNS, AD, PSC, vCenter, etc.
  • Create a nested ESXi host as datacenter 1 = 192.168.106.51
  • Create a nested ESXi host as datacenter 2 = 192.168.106.52

Below is a capture of the Intel NUC that will be configured for the VMware SDDC:

The specification of this Intel NUC has been upgraded to the maximum capacity this server can handle. Below is a screenshot of its DCUI to illustrate the specification (in summary – processor: 4 physical CPU cores with multithreading capability, memory: 32 GB RAM, disk: 480 GB SSD).

Here is the detailed specification of this mini server:

  • Processor: 6th generation Intel Core i7-6770HQ processor (2.6 to 3.5 GHz turbo, Quad Core, 6 MB Cache, 45W TDP)
  • System Memory: 32GB (Kingston DDR4 2133)
  • Storage: Intel M.2 480GB 540 series (spare M.2 slot for additional capacity)
  • Peripheral Connectivity:
    • Intel Gigabit LAN
    • One Thunderbolt 3 port with USB 3.1
    • Four Super Hi-Speed USB 3.0 ports
    • One HDMI 2.0 port and One Mini DisplayPort


First, we need to design the data center we are going to build. At a high level, the design will look like this:

With the following details:

  • Management Cluster
    Type  Name      Hostname               IP Address      Username  Password  Remarks
    Host  p-esxi50  p-esxi50.corp.local    192.168.106.50  root      VMware1!  ESXi
    VM    dns-ntp   dns-ntp.corp.local     192.168.106.10  root      VMware1!
    VM    vcsa      vcsa-106.corp.local    192.168.106.22  root      VMware1!  vCenter Server
    VM    nsxmgr    nsxmgr-106.corp.local  192.168.106.23  root      VMware1!  NSX Manager
    VM    psc       psc-106.corp.local     192.168.106.21  root      VMware1!

 

  • Compute Cluster
    Type  Name            Hostname             IP Address                    Username  Password             Remarks
    Host  n-esxi51        n-esxi51.corp.local  192.168.106.51                root      VMware1!             Nested ESXi
    Host  n-esxi52        n-esxi52.corp.local  192.168.106.52                root      VMware1!             Nested ESXi
    VM    nsx-esg                              192.168.106.1, 192.168.106.5  root      VMware1! / VMware1!  NSX Edge (ESG)
    VM    nsx-dlr                                                            root      VMware1! / VMware1!  NSX Edge (DLR)
    VM    nsx-controller                       192.168.106.61                                               NSX Controller
    VM    web01                                172.16.10.11                  root      VMware1!             3-Tier App (Web)
    VM    web02                                172.16.10.12                  root      VMware1!             3-Tier App (Web)
    VM    app01                                172.16.20.11                  root      VMware1!             3-Tier App (App)
    VM    db01                                 172.16.30.11                  root      VMware1!             3-Tier App (Db)
  • Other additional information (please ignore this, as this is only for my personal note)
    • VIC, VIO, vROps, Log Insight, VRNI

The installation steps that need to be performed are as follows:

  1. Install vSphere ESXi on the Intel NUC using a USB flash drive
    1. First read the notes here (http://www.virtuallyghetto.com/2016/05/heads-up-esxi-not-working-on-the-new-intel-nuc-skull-canyon.html), because a few parameters need to be disabled in the BIOS so the installation runs properly on the Intel NUC.
    2. Install ESXi on the Intel NUC. Before that, we need to create a bootable USB flash drive for the ESXi installation with Rufus (download it from here: https://rufus.akeo.ie/ – and follow the guidance from here: http://www.virten.net/2014/12/howto-create-a-bootable-esxi-installer-usb-flash-drive/). Then install vSphere ESXi by following this guidance: (feature walkthrough)
  2. Install VMware vSphere (ESXi & vCenter) + NSX (NSX Manager & NSX Controller)

Download the components from here: https://my.vmware.com/group/vmware/get-download?downloadGroup=VSMDS15

To speed up the installation and configuration process – since this will be used for demo and development purposes – instead of installing everything one by one through the GUI wizards (as I did previously to prepare my personal lab on my laptop, please read ….), we can also use the automation scripts created by my colleagues (Wen Bin Tay, Nick Bradford, William Lam) from VMware.

Here is the step-by-step guide:

  1. vSphere Installation: https://mobilesddc.wordpress.com/mobile-sddc-guide-part-2-vsphere-deployment/ 
  2. NSX Installation: https://mobilesddc.wordpress.com/mobile-sddc-guide-part-3-nsx-deployment/

These scripts are built with PowerCLI, the Windows PowerShell interface used to manage VMware vSphere environments (https://blogs.vmware.com/PowerCLI/).

In general, these scripts deploy VMware’s virtualization platform, including the vCenter Server Appliance (VCSA), nested ESXi hosts, the NSX components, and a sample three-tier web application. Keep in mind that this automated, nested-ESXi installation is recommended for development environments only; it is not recommended for production environments.

  3. Review the results:

Virtual machines on the parent host:

All IP Address Overview:

vCenter Overview:


  4. DONE

 

Best Regards,
Doddi Priyambodo

Running your Docker in Production Environment using VMware vSphere Integrated Containers – (Part 2)

Following up on our tutorial, we will now continue with the installation and configuration of those components.

So, rephrasing the previous blog post: by utilising vSphere Integrated Containers, developers can now use their Docker commands to manage their development environments, enriched with a dedicated container management portal (VMware Admiral) and an enterprise-grade container registry (VMware Harbor). System administrators can still use their favourite management tools to manage the infrastructure, such as vCenter, plus vRealize Operations and Log Insight, to see the whole virtual infrastructure in one holistic view, as shown in the diagram below:

A traditional container environment uses a host/server to run several containers. Docker can import images into the host, but the resources are tied to that host. The challenge is that the host sometimes has a very limited set of resources. To expand the resources on that host, we need to shut down the host (and with it the containers), add resources to that physical/virtual machine, and only then can more containers be deployed. Another challenge is that a container is not portable: it cannot be moved to another host, because it is tightly bound to the OS kernel of its container host.

Other concerns besides resources were already explained in my earlier post: the enterprise features needed if we would like to run Docker in a production environment, such as security, manageability, availability, diagnosis and monitoring, high availability, disaster recovery, etc. VIC (vSphere Integrated Containers) can address all of those concerns by using a resource pool as the container host and virtual machines as the containers. In addition, with the new Instant Clone feature of vSphere 6, VIC can deliver an “instant on” container experience alongside the security, portability, and isolation of a virtual machine. Adding extra hosts to the resource pool to dynamically increase infrastructure resources, live migration/vMotion, automatic placement/Distributed Resource Scheduler, dedicated placement/affinity, self-healing/High Availability, QoS/weight, quota/limit, guarantee/reservation, and so on all add a lot of benefits to the Docker environment.

So, these are our steps to prepare the environments for vSphere Integrated Containers (VIC).

  1. Installation and configuration of vSphere Integrated Containers
  2. Installation and configuration of Harbor
  3. Installation and configuration of Admiral

So, let’s start the tutorial now.

Checking the Virtual Infrastructure Environments

  • I am running my virtualisation infrastructure on my Mac laptop using VMware Fusion Professional 8.5.1.
  • Currently I am using vSphere ESXi Enterprise Plus version 6 update 2, and vCenter Standard version 6 update 2.
  • I have NFS storage as my centralised storage; NTP, DNS, and DHCP are also configured in another VM.


Installation of vSphere Integrated Containers (VIC)

There are two approaches to installing VIC. This is the first one (I used this to install on my laptop):

  1. Download the installation source from github = https://github.com/vmware/vic
  2. You will download vic with a clone command using git. First install git from here = https://git-scm.com/downloads
  3. Run this command = $ git clone https://github.com/vmware/vic
  4. After the download, go to the directory = $ cd vic
  5. Now, build the binaries using this command =
    docker run -v $(pwd):/go/src/github.com/vmware/vic -w /go/src/github.com/vmware/vic golang make all

OR, you can take the second approach (I used this to install on my VM):

  1. Download binary file from here = https://bintray.com/vmware/vic-repo/build
  2. In this personal lab, I am using this binary = https://bintray.com/vmware/vic-repo/build/6511#files
  3. Download that binary to the virtual machine that will be used as the VIC management host.
  4. Extract the file using = $ tar -zxvf vic_6511.tar.gz. NOTE: You will see the latest build as shown here. The build number “6511” will be different, as this is an active project and new builds are uploaded constantly.

Okay, the installer is now in place. In the steps above, a full build generates three primary components, found in the ./bin directory by default. The make targets used are the following:

  1. vic-machine – make vic-machine
  2. appliance.iso – make appliance
  3. bootstrap.iso – make bootstrap

Okay, after this we will deploy our Virtual Container Host in the VMware environment (I am using vCenter with ESXi, as explained earlier). The installation can also run on a dedicated ESXi host (without vCenter) if needed.


Now, continue by creating the Virtual Container Host in vCenter. Since I am using a Mac, I will use the macOS terminal.

$ ./vic-machine-darwin create --target 172.16.159.150/dc1.lab.bicarait.com --compute-resource cls01.dc01.lab.bicarait.com --user administrator@vsphere.local --password VMware1! --image-store ds_fusion_01 --no-tlsverify --name virtualcontainerhost01 --bridge-network dvPgContainer01 --force


After the command above, let’s check the state of our virtual infrastructure from vCenter. We will now see a new resource pool acting as the Virtual Container Host, and a VM acting as the endpoint VM that serves as the target of the container host.



Okay, the installation is complete. Let’s try to deploy a Docker container into VIC now.

docker -H 172.16.159.153:2376 --tls info


After that, let’s run the usual docker pull and run commands, the same normal operations as in my previous posts.
$ docker -H 172.16.159.153:2376 --tls \
--tlscert='./docker-appliance-cert.pem' \
--tlskey='./docker-appliance-key.pem' pull vmwarecna/nginx

$ docker -H 172.16.159.153:2376 --tls \
--tlscert='./docker-appliance-cert.pem' \
--tlskey='./docker-appliance-key.pem' run -d -p 80:80 vmwarecna/nginx

Note: for production, we must use the *.pem keys to connect to the environment. Since this is my development environment, I will skip that.

 

Okay, now finally… here is a video explaining the operation of vSphere Integrated Containers, VMware Admiral, and VMware Harbor (I already explained Admiral and Harbor in my previous blog post here):

 

Kind Regards,
Doddi Priyambodo

 

Running your Docker in Production Environment using VMware vSphere Integrated Containers – (Part 1)

In this tutorial, after explaining how to run Docker on my Mac, it is now time to move those Docker containers from your laptop to a production environment. At VMware, we utilise vSphere ESXi as the production-grade virtualisation technology at the foundation of the infrastructure.

In a production environment, a lot of things need to be considered: availability, manageability, performance, reliability, scalability, and security (AMPRSS). These AMPRSS considerations can be addressed when you move your Docker containers from your development environment (laptop) to the production environment (vSphere ESXi). One of the concerns with Docker technology is that containers share the same kernel and are therefore less isolated than real VMs; a bug in the kernel affects every container.

vSphere Integrated Containers Engine allows developers familiar with Docker to develop in containers and deploy them alongside traditional VM-based workloads on vSphere clusters, allowing these workloads to be managed through the vSphere UI in a way familiar to existing vSphere admins.

Docker itself is far less capable than an actual hypervisor: it doesn’t come with HA, live migration, hardware-virtualization security, etc. VIC (vSphere Integrated Containers) brings the container paradigm directly to the hypervisor, allowing you to deploy containers as first-class citizens. The net result is that containers inherit all of the benefits of VMs, because they are VMs. The Docker image, once instantiated, becomes a VM inside vSphere. This solves security as well as operational concerns at the same time.

But these are NOT traditional VMs that require, for example, 2 TB and take 2 minutes to boot. These are usually only as big as the Docker image itself and take a few seconds to instantiate. They boot from a minimal ISO that contains a stripped-down Linux kernel (based on Photon OS), and the container images and volumes are attached as disks.

The containerVMs are provisioned into a “Virtual Container Host”, which is just like a Swarm cluster but implemented as logical distributed capacity in a vSphere resource pool. You don’t need to add or remove physical nodes to increase or decrease the VCH capacity; you simply reconfigure its resource limits and let vSphere clustering and DRS (Distributed Resource Scheduler) handle the details.

The biggest benefit of VIC is that it helps to draw a clear line between the infrastructure provider (IT admin) and the consumer (developer/ops). The consumer wins because they don’t have to deal with managing container hosts, patching, configuring, etc. The provider wins because they can leverage the operational model they are already using today (including NSX and VSAN).

Developers will continue to develop Docker containers and IT admins will keep managing VMs – the best of both worlds.

It can also be combined with other enterprise tools to manage the enterprise environment, such as vRealize Operations, vRealize Log Insight, Virtual SAN, VMware NSX, and vRealize Automation.

In this post, I will utilise these technologies from VMware:

  • vSphere ESXi 6 U2 as the number one, well-known and stable production-grade virtualisation technology.
  • vCenter 6 U2 as the Virtualisation central management and operation tool.
  • vSphere Integrated Containers as the Enterprise Production Ready container runtime for vSphere, allowing developers familiar with Docker to develop in containers and deploy them alongside traditional VM-based workloads on vSphere clusters. Download from here: The vSphere Integrated Containers Engine
  • VMware Admiral as the container management platform for deploying and managing container-based applications. It provides a UI for developers and app teams to provision and manage containers, including retrieving stats and info about container instances. Cloud administrators can manage container hosts and apply governance to their usage, including capacity quotas and approval workflows. Download from here: Admiral
  • VMware Harbor as an enterprise-class registry server that stores and distributes Docker images. It has a UI and the functionality usually required by an enterprise, such as security, identity, replication, and management. Download from here: Harbor

This is the block diagram for those components:

As you can see in the diagram above, vSphere Integrated Containers comprises three main components, all of which are available as open source on GitHub. With these three capabilities, vSphere Integrated Containers enables VMware customers to deliver a production-ready container solution to their developers and app teams.

 

*to be continued in part 2.

Kind Regards,
Doddi Priyambodo

Running your First Cloud Native Applications using Docker Container in Mac

As mentioned in my previous post, I will elaborate on cloud native applications. But before that, I will post some basic concepts of Docker as the container technology for the cloud native applications approach.

Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure in the same ways you manage your applications. By taking advantage of Docker’s methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production.

If you are still confused about what Docker, microservices, and cloud native applications mean, you can read about them here: http://bicarait.com/2016/05/31/microservices-cloud-native-applications/

In this post, I will start with the basics of how to run your first application in Docker, provisioned on your Mac laptop. Then I will do the same in vSphere Integrated Containers and on the VMware Photon Platform.


Let’s Start the first Chapter: INSTALLATION 

  1. Download your Docker Engine from this URL (stable version): Get Docker for Mac (stable)
  2. Actually there are two approaches to running Docker on your Mac. The first is to use Docker for Mac (which is what we will do), and the second is to use Docker Toolbox. The difference is that the Docker for Mac approach uses HyperKit as a lightweight virtualisation technology to run the containers, while Docker Toolbox uses VirtualBox as the virtualisation technology.
  3. You can actually run both the Docker for Mac and Docker Toolbox approaches at the same time on your macOS, but there are several things you need to do, such as switching environments (set and unset commands). I will not elaborate on that in this post.
  4. Assume that your machine does not yet have a Docker engine installed.
  5. Install and run Docker. Double-click the Docker.dmg file you downloaded earlier to start the installation.
  6. Check the Docker version now running on your Mac after the installation is complete.
  7. Let’s start with a basic application: an nginx web server using Docker (the exact commands are sketched after this list).
  8. Check http://localhost first to see its current status.
  9. Basically, Docker will try to run your application’s image locally, but if it cannot find it, it will search the public repository (the default configuration is Docker Hub).
  10. Check http://localhost again to see the new status.
  11. Check the status of the container using the docker ps command. If you want to stop the web server, run docker stop webserver, and start it again with docker start webserver.
  12. If you want to stop and remove the container, use the command docker rm -f webserver. If you want to delete the local image, use the command docker rmi nginx. Before that, you can list the local images using docker images.
  13. If you want to use a Docker registry other than https://hub.docker.com, or share files from your Mac with your Docker engine, you can also configure that in the Docker for Mac menu.
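
For reference, the command behind the screenshots in steps 7–12 above is roughly the following (the container name “webserver” follows the standard Docker for Mac getting-started example, so treat it as an assumption):

# Pull the nginx image if it is not available locally, then run it and publish port 80
$ docker run --detach --publish 80:80 --name webserver nginx

# Verify that the container is up before re-checking http://localhost
$ docker ps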

Let’s Continue with the second Chapter: BOARDING YOUR APPS 

For this example we will utilise Docker Compose to run WordPress in an isolated environment. Compose is a Docker tool for running multi-container environments. We will create a compose file, and then execute the YAML file using the docker-compose command; a sketch of what this might look like follows, and then the step-by-step details.
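
Here is a minimal sketch of that compose file and workflow. The service definitions and credentials below are assumptions based on the standard WordPress + MySQL example, kept consistent with the port 8000 mapping used in the steps that follow:

$ mkdir -p ~/my-wordpress && cd ~/my-wordpress

# Write a minimal docker-compose.yml (image tags and passwords are placeholders)
$ cat > docker-compose.yml <<'EOF'
version: '2'
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
EOF

# Build and start the WordPress + MySQL stack in the background
$ docker-compose up -d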

  1. Create a directory for the project on your Mac.
  2. Create a Docker Compose file. This will include WordPress and MySQL to create a simple blog website.
  3. Now, build the project using the command $ docker-compose up -d
  4. Check whether the images have been pulled and are running, using the docker images and docker ps commands.
  5. Finally, open WordPress in your browser. Because the configuration maps port 8000, we open http://localhost:8000
  6. Complete the WordPress installation using the UI wizard, then open the newly created site.

 

Kind Regards,
Doddi Priyambodo

VMware Photon Platform or vSphere Integrated Container

Implementing cloud native applications using container technology is hard to ignore if you want to keep up with this culture of agile and fast innovation. VMware has two approaches to support this initiative: the vSphere Integrated Containers approach, or the VMware Photon Platform approach.

So, what are the differences? In Summary:

  • If you want to run both containerized and traditional workloads in production side by side on your existing infrastructure, VIC is the ideal choice. VIC extends all the enterprise capabilities of vSphere without requiring additional investment in retooling or re-architecting your existing infrastructure.
  • If you are looking at building an on-prem, greenfield infrastructure stack for running only containerized workloads, and you would also like a highly available and scalable control plane, an API-driven, automated DevOps environment, plus multi-tenancy for resource creation and isolation, Photon Platform is the way to go.

Over the next couple of weeks, I will elaborate more on these cloud native applications. Please wait for my next posts.

So, this is the plan:
1. Run Docker apps on the laptop (in my case, I will use a Mac)
We will utilise: macOS, Docker, Swarm.
2. Run Docker apps in vSphere Integrated Containers
We will utilise: VMware vSphere, vCenter, Photon OS, Harbor, Admiral.
3. Run Docker apps on the VMware Photon Platform
We will utilise: VMware vSphere, Photon Controller, Photon OS, Kubernetes.

 

Kind Regards,
Doddi Priyambodo

Technical Questions Asked during Requirement Analysis for a vSphere Design

I will try to briefly summarise some of the basic technical questions that are usually asked when we run a Requirement Analysis / Design Workshop engagement with a customer.

Below are some of the high-level questions I usually ask, digging deeper based on the answers. (Note: these are technical questions, so they are not aimed at business people or C-level executives. Finding the correct audience is important.)

  • Compute: To gather information regarding the planned target Compute infrastructure
  • Storage: To understand the current and expected storage landscape
  • vCenter: To describe the state of vCenter to manage the ESXi environment
  • Network: To gather information around current and target network infrastructure
  • Backup & Patching: To understand the current backup and patching methodology.
  • Monitor: To analyze the current and expected monitoring processes
  • VM Workloads: To analyze the details of the current physical workloads to be virtualized and consolidated
  • Security: To understand in detail the current security practices.
  • Processes & Operations: To understand the current operation procedures and processes
  • Availability & Disaster Recovery: To gather information on business continuity processes

A more detailed breakdown of the questions above can be made, for example as follows:

  • Compute: hardware type, network, disks, brand, redundancy, processors, storage connectivity, booting, automation, scalability, etc.
  • Storage: SAN/NAS/iSCSI/NFS/VSAN, IOPS, latency, storage technology, cloning/snapshots, replication, etc.
  • vCenter: linked mode, appliance, database decision, disk size, CPU and memory size, prerequisites, etc.
  • Network: leaf-spine, backbone technology, bandwidth, VLAN, VXLAN, teaming, VPC, link aggregation, distributed switch, vendors, etc.
  • Backup and Patching: storage backup, 3rd-party backup, VDP, VADP, Update Manager, etc.
  • Monitor: items to monitor, centralized log server, performance, capacity, usage, thresholds, alerts, placement, etc.
  • VM Workloads: user growth, IOPS, Tier 1/Tier 2/Tier 3, mission critical, OS clustering, Java/Oracle/SQL Server/SAP, etc.
  • Security: firewall ports, virus protection, distributed firewall, system hardening, lockdown mode, access, etc.
  • Processes and Operations: SLA agreements, private/public/hybrid strategy, budget/scope constraints, unique processes, etc.
  • Availability & DR: RPO, RTO, VMware HA, Fault Tolerance, active-active DC, bandwidth and hops, priority of protected VMs, etc.

I hope this is useful.

Kind Regards,
Doddi Priyambodo

Key Factors to create Perfect Design for VMware vSphere Infrastructure

If you are doing a vSphere design right now, please remember this AMPRS rule for your design document.

Always base your design decisions on these key factors: Availability, Manageability, Performance, Recoverability, and Security.

Especially if it is for a business-critical application, you MUST consider all of these factors.

 

Design qualities and their descriptions:

  • Availability – Indicates the effect of a design choice on the ability of a technology and the related infrastructure to achieve highly available operation.
    Key metrics: percent of uptime.

  • Manageability – Indicates the effect of a design choice on the flexibility of an environment and the ease of operations in its management. Sub-qualities might include scalability and flexibility. Higher ratios are considered better indicators.
    Key metrics: servers per administrator; clients per IT personnel; time to deploy new technology.

  • Performance – Indicates the effect of a design choice on the performance of the environment. This does not necessarily reflect the impact on other technologies within the infrastructure.
    Key metrics: response time; throughput.

  • Recoverability – Indicates the effect of a design choice on the ability to recover from an unexpected incident that affects the availability of an environment.
    Key metrics: RTO (recovery time objective); RPO (recovery point objective).

  • Security – Indicates the ability of a design choice to have a positive or negative impact on overall infrastructure security. Can also indicate whether a quality affects the ability of a business to demonstrate or achieve compliance with certain regulatory policies.
    Key metrics: unauthorized access prevention; data integrity and confidentiality; forensic capabilities in case of a compromise.

 

Kind Regards,

Doddi Priyambodo

How to Execute External Guest OS Script from VRO and VRA

These two posts explain really well the mechanism of extending VRA (VMware vRealize Automation) with VRO (VMware vRealize Orchestrator) to execute external scripts located in the guest operating system’s folders (either Windows or Linux).

It is really useful if you want to implement one of these use cases:
– Silent installation of a database/application platform (e.g. SQL Server, Oracle DB, MySQL, PostgreSQL, Apache, etc.) after the VM is deployed
– Configuring parameters in applications, databases, middleware, and agents (e.g. NetBackup agent, Oracle DB, Tomcat, WebLogic, etc.) after the VM is deployed
– Executing other external scripts located in the guest OS

Please note that you can also use it with VRO only, if you don’t want to automate the process from VRA.

You can find the posts at these links:

http://www.vmtocloud.com/how-to-extend-vcac-with-vco-part-1-installation/
http://www.vmtocloud.com/how-to-extend-vcac-with-vco-part-2-hello-world-script-in-guest/

 

Kind Regards,
Doddi Priyambodo