If you think Cloud Computing is only about “hosting your server” (which a lot of people do)… then please read some of the public materials out there and create a free AWS account to try it yourself – it goes way beyond that!
What I would like to talk about right now are the services and platforms available for machine learning – for building artificial intelligence services for your customers.
At Amazon, artificial intelligence has been investigated for over 20 years. Machine learning (ML) algorithms drive many of our internal systems. It’s also core to the capabilities our customers experience – from the path optimization in our fulfillment centers and Amazon.com’s recommendation engine, to Echo powered by Alexa, our drone initiative Prime Air, and our new retail experience Amazon Go. This is just the beginning. Our mission is to share our learnings and ML capabilities as fully managed services, and put them into the hands of every developer and data scientist.
Machine Learning Application Services – ready-to-use functions and building blocks for your advanced applications.
Amazon Rekognition makes it easy to add image and video analysis to your applications. You just provide an image or video to the Rekognition API, and the service can identify the objects, people, text, scenes, and activities, as well as detect any inappropriate content. Amazon Rekognition also provides highly accurate facial analysis and facial recognition on images and video that you provide. You can detect, analyze, and compare faces for a wide variety of user verification, people counting, and public safety use cases.
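To get a feel for it, here is a quick call from the AWS CLI – a minimal sketch, assuming you have the CLI configured and an image already uploaded to S3 (the bucket and file names are placeholders):
$ aws rekognition detect-labels \
    --image '{"S3Object":{"Bucket":"my-demo-bucket","Name":"photo.jpg"}}' \
    --max-labels 5
The call returns a JSON list of detected objects and scenes with confidence scores.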
Amazon Lex is a service for building conversational interfaces into any application using voice and text. Amazon Lex provides the advanced deep learning functionalities of automatic speech recognition (ASR) for converting speech to text, and natural language understanding (NLU) to recognize the intent of the text, to enable you to build applications with highly engaging user experiences and lifelike conversational interactions.
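To try the runtime side, the AWS CLI can send text to a bot you have already built – a minimal sketch, assuming a bot named OrderFlowers published under an alias prod (both placeholders):
$ aws lex-runtime post-text \
    --bot-name OrderFlowers --bot-alias prod \
    --user-id demo-user-1 \
    --input-text "I would like to order flowers"
The response contains the recognized intent and any slot values extracted from the sentence.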
Amazon Polly is a service that turns text into lifelike speech, allowing you to create applications that talk, and build entirely new categories of speech-enabled products. Amazon Polly is a Text-to-Speech service that uses advanced deep learning technologies to synthesize speech that sounds like a human voice.
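Trying Polly is a one-liner from the AWS CLI; this sketch assumes the Joanna voice and writes the result to a local MP3 file:
$ aws polly synthesize-speech \
    --output-format mp3 --voice-id Joanna \
    --text "Hello from Amazon Polly" \
    hello.mp3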
Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find insights and relationships in text. The service identifies the language of the text; extracts key phrases, places, people, brands, or events; understands how positive or negative the text is; analyzes text using tokenization and parts of speech; and automatically organizes a collection of text files by topic. Using these APIs, you can analyze text and apply the results in a wide range of applications including voice of customer analysis, intelligent document search, and content personalization for web applications.
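You can also exercise Comprehend straight from the AWS CLI – a minimal sketch using the sentiment and entity APIs on ad-hoc text:
$ aws comprehend detect-sentiment \
    --language-code en \
    --text "I really love this new phone, but the battery is disappointing"
$ aws comprehend detect-entities \
    --language-code en \
    --text "Amazon was founded by Jeff Bezos in Seattle"
Each call returns JSON with the detected sentiment or entities and their confidence scores.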
Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for developers to add speech-to-text capability to their applications. Using the Amazon Transcribe API, you can analyze audio files stored in Amazon S3 and have the service return a text file of the transcribed speech.
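A quick sketch of starting a transcription job from the CLI, assuming an audio file already sits in S3 (the bucket, file, and job names are placeholders):
$ aws transcribe start-transcription-job \
    --transcription-job-name demo-job-1 \
    --language-code en-US \
    --media-format mp3 \
    --media MediaFileUri=s3://my-demo-bucket/interview.mp3
$ aws transcribe get-transcription-job \
    --transcription-job-name demo-job-1
The second call reports the job status and, once complete, a link to the transcript.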
Amazon Translate is a neural machine translation service that delivers fast, high-quality, and affordable language translation. Neural machine translation is a form of language translation automation that uses deep learning models to deliver more accurate and more natural sounding translation than traditional statistical and rule-based translation algorithms.
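And Translate is a single CLI call – for example, English to Spanish:
$ aws translate translate-text \
    --source-language-code en --target-language-code es \
    --text "Cloud computing is way beyond server hosting"
The response contains the translated text along with the language codes used.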
Instances for Deep Learning – ready-to-use EC2 instances pre-installed with popular deep learning frameworks.
AWS Deep Learning AMIs provide machine learning practitioners and researchers with the infrastructure and tools to accelerate deep learning in the cloud, at any scale. You can quickly launch Amazon EC2 instances pre-installed with popular deep learning frameworks such as Apache MXNet and Gluon, TensorFlow, Microsoft Cognitive Toolkit, Caffe, Caffe2, Theano, Torch, PyTorch, Chainer, and Keras to train sophisticated, custom AI models, experiment with new algorithms, or to learn new skills and techniques.
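Launching one is an ordinary EC2 call – a minimal sketch, assuming you have looked up the Deep Learning AMI ID for your region and have a key pair (both values below are placeholders):
$ aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type p2.xlarge \
    --key-name my-key-pair \
    --count 1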
Machine Learning Platform Services – ready-to-use platforms to develop your advanced applications.
Amazon Machine Learning is a service that makes it easy for developers of all skill levels to use machine learning technology. Amazon Machine Learning provides visualization tools and wizards that guide you through the process of creating machine learning (ML) models without having to learn complex ML algorithms and technology. Once your models are ready, Amazon Machine Learning makes it easy to obtain predictions for your application using simple APIs, without having to implement custom prediction generation code, or manage any infrastructure.
Amazon SageMaker is a fully-managed platform that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. Amazon SageMaker removes all the barriers that typically slow down developers who want to use machine learning.
AWS DeepLens is a deep-learning-enabled video camera (hardware) for developers. It puts deep learning in the hands of developers – literally – with a fully programmable video camera, tutorials, code, and pre-trained models designed to expand deep learning skills.
PS: Try it yourself to see how easy it is to start BUILDING your service on top of the AWS Cloud Platform (use the FREE account! NOW!) – at Amazon we like to say, “Get your hands dirty!”
This time, I want to continue my previous post (here) about my Home Lab. Below are my earlier posts describing the Home Lab I own, along with some tutorials I have tried on it:
Anyway, I will explain a few things about the installation of the INTEL-NUC I own – the active Home Lab I use to tinker with VMware products such as NSX, VIO, VIC, vRNI, and, later, PKS.
I really want to use this mini server as a portable mini lab that I can carry around to satisfy my tinkering hobby.
This hobby is well served by the INTEL-NUC I currently have. Some of the reasons were already explained in my previous posts (see the links above). Previously I ran nested installations on VMware Workstation on my laptop and my home PC. Since an NSX installation requires fairly large resources, I think it is better to use dedicated hardware for this installation – one more reason to choose the INTEL-NUC over installing on my laptop.
The strategy is to make this INTEL-NUC the parent host for several nested ESXi hosts. In summary:
Use Intel NUC as Parent Host = 192.168.106.50
Create several administrative VMs, such as NTP, DNS, AD, PSC, vCenter, etc.
Create a nested ESXi host as datacenter 1 = 192.168.106.51
Create a nested ESXi host as datacenter 2 = 192.168.106.52
Below is a capture of the Intel NUC that will be configured for the VMware SDDC:
The Intel NUC’s specification has been upgraded to the maximum capacity this server can handle. Here is a screenshot of its DCUI showing the specification (in summary – processor: 4 physical CPU cores with multithreading capability; memory: 32 GB RAM; disk: 480 GB SSD).
Here is the detailed specification of this mini server:
Download the components from here: https://my.vmware.com/group/vmware/get-download?downloadGroup=VSMDS15
To speed up the installation and configuration – since this will be used for demo & development purposes – rather than installing every component one by one via the GUI wizard (as I did previously to prepare the personal lab on my laptop, please read …), we can use an automation script created by my colleagues (Wen Bin Tay, Nick Bradford, William Lam) from VMware.
Here is the step-by-step:
This script is built with PowerCLI, the Windows PowerShell interface used to manage VMware vSphere environments (https://blogs.vmware.com/PowerCLI/).
In general, the script deploys VMware’s virtualization platform, including the vCenter Server Appliance (VCSA), nested ESXi, NSX components, and a sample Three-Tier Web Application. Keep in mind that this automated nested ESXi installation is only recommended for development environments; it is not recommended for production.
Virtual Machines on the Parent Host:
All IP Address Overview:
Following our tutorial, we will now continue with the installation and configuration of those components.
So, to rephrase my previous blog post: with vSphere Integrated Containers, developers can keep using their Docker commands to manage their development environments, enriched with a dedicated container management portal (VMware Admiral) and an enterprise-grade container registry (VMware Harbor). System administrators can still use their favourite management tools – such as vCenter, plus vRealize Operations and Log Insight – to manage the virtual infrastructure in one holistic view, as shown in the diagram below:
A traditional container environment uses one host/server to run several containers. Docker can import images onto the host, but the resources are tied to that host. The challenge is that the host sometimes has a very limited set of resources: to expand them, we need to shut down the containers and the host, add resources to that physical/virtual machine, and only then deploy more containers. Another challenge is that a container is not portable – it cannot simply be moved to another host, because it is tightly coupled to the OS kernel of its container host.
Beyond resources, my earlier post already explained other enterprise concerns when running Docker in a production environment: security, manageability, availability, diagnostics and monitoring, high availability, disaster recovery, and so on. VIC (vSphere Integrated Containers) addresses all of these by using a resource pool as the container host and virtual machines as the containers. Combined with the vSphere 6 Instant Clone feature, VIC delivers an “instant on” container experience alongside the security, portability, and isolation of virtual machines. Adding extra hosts to the resource pool to dynamically grow infrastructure resources, live migration (vMotion), auto placement (Distributed Resource Scheduler), dedicated placement (affinity), self-healing (High Availability), QoS (shares), quotas (limits), guarantees (reservations), and more bring a lot of benefits to the Docker environment.
So, these are our steps to prepare the environments for vSphere Integrated Containers (VIC).
Installation and configuration of vSphere Integrated Containers
Installation and configuration of Harbor
Installation and configuration of Admiral
So, let’s start the tutorial now.
Checking the Virtual Infrastructure Environments
I am running my virtualisation infrastructure on my Mac laptop using VMware Fusion Professional 8.5.1.
Currently I am using vSphere ESXi Enterprise Plus version 6 update 2, and vCenter Standard version 6 update 2.
I have NFS as my centralised storage; NTP, DNS, and DHCP are also configured in another VM.
Installation of vSphere Integrated Containers (VIC)
There are two approaches to install VIC. This is the first one (I used this to install on my laptop):
Download the binary to the virtual machine that you will use as the VIC management host.
Extract the file using: $ tar -zxvf vic_6511.tar.gz. NOTE: you will see the latest build as shown here; the build number “6511” will differ, as this is an active project and new builds are uploaded constantly.
Okay, the installer is now in place. In the steps above, there are three primary components generated by a full build, found in the ./bin directory by default. The make targets used are the following:
vic-machine – make vic-machine
appliance.iso – make appliance
bootstrap.iso – make bootstrap
Okay, after this we will deploy our Virtual Container Host in the VMware environment (I am using vCenter with ESXi, as explained earlier). The installation can also run on a dedicated ESXi host (without vCenter) if needed.
Now, continue to create the Virtual Container Host in vCenter. Since I am using a Mac, I will use the Mac terminal:
$ ./vic-machine-darwin create --target 172.16.159.150/dc1.lab.bicarait.com --compute-resource cls01.dc01.lab.bicarait.com --user email@example.com --password VMware1! --image-store ds_fusion_01 --no-tlsverify --name virtualcontainerhost01 --bridge-network dvPgContainer01 --force
After the command above completes, let’s check the state of our virtual infrastructure from vCenter. We will now see a new resource pool acting as the Virtual Container Host, plus an endpoint VM as the target of the container host. Okay, installation is complete. Let’s talk to the Docker endpoint in VIC now:
$ docker -H 172.16.159.153:2376 --tls info
After that, let’s run the usual pull and run commands – the same normal Docker operations as in my previous posts:
$ docker -H 172.16.159.153:2376 --tls \
    --tlskey='./docker-appliance-key.pem' pull vmwarecna/nginx
$ docker -H 172.16.159.153:2376 --tls \
    --tlskey='./docker-appliance-key.pem' run -d -p 80:80 vmwarecna/nginx
Note: for production, we must use the *.pem key to connect to the environment. Since this is my development environment, I will skip that.
Okay, now finally… here is a video explaining the operation of vSphere Integrated Containers, VMware Admiral, and VMware Harbor (I already explained Admiral and Harbor in my previous blog post here):
In my previous tutorial I explained how to run Docker on my Mac. Now it’s time to move those containers from your laptop to a production environment. At VMware, we utilise vSphere ESXi as the production-grade virtualisation technology that forms the foundation of the infrastructure.
In a production environment, a lot of things need to be considered: availability, manageability, performance, reliability, scalability, and security (AMPRSS). These AMPRSS considerations can be addressed when moving your Docker containers from the development environment (laptop) to the production environment (vSphere ESXi). One concern with Docker technology is that containers share the same kernel and are therefore less isolated than real VMs – a bug in the kernel affects every container.
vSphere Integrated Containers Engine allows developers familiar with Docker to develop in containers and deploy them alongside traditional VM-based workloads on vSphere clusters, and allows these workloads to be managed through the vSphere UI in a way familiar to existing vSphere admins.
Docker itself is far less capable than an actual hypervisor. It doesn’t come with HA, live migration, hardware virtualization security, and so on. VIC (vSphere Integrated Containers) brings the container paradigm directly to the hypervisor, allowing you to deploy containers as first-class citizens. The net result is that containers inherit all of the benefits of VMs, because they are VMs. The Docker image, once instantiated, becomes a VM inside vSphere. This solves security as well as operational concerns at the same time.
But these are NOT traditional VMs that require, for example, 2 TB and take 2 minutes to boot. These are usually only as big as the Docker image itself and take a few seconds to instantiate. They boot from a minimal ISO which contains a stripped-down Linux kernel (based on Photon OS), and the container images and volumes are attached as disks.
The ContainerVMs are provisioned into a “Virtual Container Host”, which is just like a Swarm cluster but implemented as logical distributed capacity in a vSphere resource pool. You don’t need to add or remove physical nodes to increase or decrease the VCH capacity; you simply reconfigure its resource limits and let vSphere clustering and DRS (Distributed Resource Scheduler) handle the details.
The biggest benefit of VIC is that it helps to draw a clear line between the infrastructure provider (IT admin) and the consumer (developer/ops). The consumer wins because they don’t have to deal with managing container hosts, patching, configuring, and so on. The provider wins because they can leverage the operational model they are already using today (including NSX and VSAN).
Developers will continue to develop with Docker, and IT admins will keep managing VMs. The best of both worlds.
It can also be combined with other enterprise tools to manage the enterprise environment, such as vRealize Operations, vRealize Log Insight, Virtual SAN, VMware NSX, and vRealize Automation.
In this post, I will utilise these technologies from VMware:
vSphere ESXi 6 U2 as the number-one, well-known, and stable production-grade virtualisation technology.
vCenter 6 U2 as the central virtualisation management and operations tool.
vSphere Integrated Containers as the Enterprise Production Ready container runtime for vSphere, allowing developers familiar with Docker to develop in containers and deploy them alongside traditional VM-based workloads on vSphere clusters. Download from here: The vSphere Integrated Containers Engine
VMware Admiral as the container management platform for deploying and managing container-based applications. It provides a UI for developers and app teams to provision and manage containers, including retrieving stats and info about container instances. Cloud administrators can manage container hosts and apply governance to their usage, including capacity quotas and approval workflows. Download from here: Admiral
VMware Harbor as an enterprise-class registry server that stores and distributes Docker images. It has a UI and the functionality usually required by an enterprise, such as security, identity, replication, and management. Download from here: Harbor
This is the diagram block for those components:
As you can see in the diagram above, vSphere Integrated Containers comprises three main components, all of which are available as open source on GitHub. With these three capabilities, vSphere Integrated Containers enables VMware customers to deliver a production-ready container solution to their developers and app teams.
As mentioned in a previous post, I will elaborate on Cloud Native Applications. But before that, let me cover some basic concepts of Docker, the container technology behind the Cloud Native Applications approach.
Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure in the same ways you manage your applications. By taking advantage of Docker’s methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production.
If you are still unsure what Docker, microservices, and Cloud Native Applications mean, you can read about them here: http://bicarait.com/2016/05/31/microservices-cloud-native-applications/
In this post, I will start with the basics of how to run your first application in Docker, provisioned on your Mac laptop. Then I will do the same in vSphere Integrated Containers and on the VMware Photon Platform.
Actually, there are two approaches to running Docker on your Mac. The first is to utilise Docker for Mac (which is what we will do); the second is to utilise Docker Toolbox. The difference: the Docker for Mac approach uses HyperKit as a lightweight virtualisation technology to run the containers, while Docker Toolbox uses VirtualBox as the virtualisation technology.
You can actually run both the Docker for Mac and Docker Toolbox approaches at the same time on your macOS, but there are several things you need to do first, such as creating separate environments (the set and unset commands). I will not elaborate on that in this post.
Assume that your machine does not have a Docker engine installed yet.
Install and run Docker. Double-click the Docker.dmg you downloaded earlier to start the installation.
Check the Docker version that is now running on your Mac after the installation is completed.
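For example, from the terminal:
$ docker --version
$ docker info
The second command shows the engine status, storage driver, and other details.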
Let’s start with a basic application: an nginx web server using Docker.
Check http://localhost first to see the current status.
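Now run the web server. This is the standard command for the official nginx image; the container name webserver matches the commands used later in this post:
$ docker run -d -p 80:80 --name webserver nginx
The -d flag runs the container detached, and -p 80:80 maps container port 80 to port 80 on localhost.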
Basically, Docker first tries to find the image for your application locally. If it cannot find it, it searches the public repository (the default configuration is Docker Hub).
Check http://localhost again to see the new status.
Check the status of the container using the docker ps command. If you want to stop the web server, run docker stop webserver; start it again with docker start webserver.
If you want to stop and remove the container, use the command docker rm -f webserver. If you want to delete the local image, run docker rmi nginx. Before that, you can list the local images using docker images.
If you want to use a Docker registry other than https://hub.docker.com, or share files from your Mac to your Docker engine, you can also configure that in the Docker for Mac menu.
Let’s continue with the second chapter: BOARDING YOUR APPS
For this example we will utilise Docker Compose to run WordPress in an isolated environment. Compose is a Docker tool for running multi-container environments: we create a Compose file, then execute the YAML file using the docker-compose command.
Create a directory for the project on your Mac.
Create a Docker Compose file. This will include WordPress and MySQL to create a simple blog website; see the sketch below.
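A minimal sketch of such a Compose file, modelled on the common WordPress example (the credentials and volume name are placeholders you should change):
$ cat > docker-compose.yml <<'EOF'
version: '2'
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: wordpress   # placeholder credentials
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
volumes:
  db_data:
EOF
The "8000:80" port mapping is what makes the site reachable at http://localhost:8000 later in this post.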
Now, bring the project up using the command $ docker-compose up -d
Check whether the images were pulled and the containers are running, using the docker images and docker ps commands.
Finally, open WordPress in your browser. Because we configured port 8000, we browse to http://localhost:8000
Complete the WordPress installation using the UI wizard, then open the newly created site.
Implementing Cloud Native Applications using container technology is hard to ignore if you want to keep up with this culture of agile and fast innovation. VMware has two approaches to support this initiative: the vSphere Integrated Containers approach or the VMware Photon Platform approach.
So, what are the differences? In summary:
If you want to run both containerized and traditional workloads in production side by side on your existing infrastructure, VIC is the ideal choice. VIC extends all the enterprise capabilities of vSphere without requiring additional investment in retooling or re-architecting your existing infrastructure.
If you are looking at building an on-prem, greenfield infrastructure stack for running only containerized workloads, and you also want a highly available and scalable control plane, an API-driven, automated DevOps environment, plus multi-tenancy for resource creation and isolation, Photon Platform is the way to go.
Over the next couple of weeks, I will elaborate more on these cloud native applications. Please wait for my next posts.
So, this is the plan:
1. Run Docker Apps in the laptop (for my case, I will use Mac)
We will utilise: Mac OS, Docker, Swarm.
2. Run Docker Apps in vSphere Integrated Container
We will utilise: VMware vSphere, vCenter, Photon OS, Harbor, Admiral.
3. Run Docker Apps in VMware Photon Platform
We will utilise: VMware vSphere, Photon Controller, Photon OS, Kubernetes
Let me briefly summarise some of the basic technical questions that we usually ask when running a Requirement Analysis / Design Workshop engagement with a customer.
Below are some high-level questions I usually ask, digging deeper based on the answers to each. (Note: these are technical questions, so they should not be asked of business people or C-level executives – finding the correct audience is important.)
Compute: To gather information regarding the planned target Compute infrastructure
Storage: To understand the current and expected storage landscape
vCenter: To describe the state of vCenter to manage the ESXi environment
Network: To gather information around current and target network infrastructure
Backup & Patching: To understand the current backup and patching methodology.
Monitoring: To analyze the current and expected monitoring processes
VM Workloads: To analyze the details of the current physical workloads to be virtualized and consolidated
Security: To understand in detail the current security practices.
Processes & Operations: To understand the current operation procedures and processes
Availability & Disaster Recovery: To gather information on business continuity processes
A more detailed breakdown of the questions above can also be produced, for example as follows:
If you are working on a vSphere design right now, please remember the AMPRS rule for your design document.
Always base your design decisions on these key factors: Availability, Manageability, Performance, Recoverability, and Security.
Especially if the design is for a Business Critical Application, you MUST consider all of these factors.
Availability
Indicates the effect of a design choice on the ability of a technology and the related infrastructure to achieve highly available operation.
Key metric: percent of uptime.
Manageability
Indicates the effect of a design choice on the flexibility of an environment and the ease of operations in its management. Sub-qualities might include scalability and flexibility. Higher ratios are considered better indicators. Key metrics:
· Servers per administrator.
· Clients per IT personnel.
· Time to deploy new technology.
Performance
Indicates the effect of a design choice on the performance of the environment. This does not necessarily reflect the impact on other technologies within the infrastructure. Key metrics:
· Response time.
Recoverability
Indicates the effect of a design choice on the ability to recover from an unexpected incident that affects the availability of an environment. Key metrics:
· RTO – recovery time objective.
· RPO – recovery point objective.
Security
Indicates the ability of a design choice to have a positive or negative impact on overall infrastructure security. Can also indicate whether a quality has an impact on the ability of a business to demonstrate or achieve compliance with certain regulatory policies. Key metrics:
· Unauthorized access prevention.
· Data integrity and confidentiality.
· Forensic capabilities in case of a compromise.
These two posts explain really well how to extend VRA (VMware vRealize Automation) with VRO (VMware vRealize Orchestrator) to execute external scripts located in the guest operating system (either Windows or Linux).
It is really useful if you want to execute one of these use cases:
– Silent installation of a database/application platform (e.g., SQL Server, Oracle DB, MySQL, PostgreSQL, Apache, etc.) after the VM is deployed
– Configuring parameters in apps, databases, middleware, or agents (e.g., NetBackup agent, Oracle DB, Tomcat, WebLogic, etc.) after the VM is deployed
– Executing other external scripts located in the guest OS
Please note that you can also use this with VRO alone, if you don’t want to drive the process from VRA.
You can find the posts at these links: http://www.vmtocloud.com/how-to-extend-vcac-with-vco-part-1-installation/ and http://www.vmtocloud.com/how-to-extend-vcac-with-vco-part-2-hello-world-script-in-guest/
Hands-on Lab for Management products
Technical blogs by VMware or customers
o http://sflanders.net/ is the world’s #1 blog for Log Insight. Steven is the Product Architect for Log Insight.
o http://virtual10.com/ by Manny Sidhu, a Virtualization architect working for a global bank.
o http://vxpresss.blogspot.sg/ by Sunny Dua, VMware PSO Consultant and CTO Ambassador.
o http://virtual-red-dot.info by Iwan Rahabok, VMware SE and CTO Ambassador.
Even though it is now 2015, from my interactions with colleagues there are still IT professionals who ask what “Virtualization” is – which eventually leads to the question of what “Cloud Computing” is. And the most fundamental question of all: “What is the benefit for a company of implementing these two things?”
If we ask several people, or browse several sites, the answers will likely vary, with several definitions that each contain some truth. But the principle behind the answers is likely the same. In my opinion, the definitions of virtualization and cloud computing are as follows:
I define virtualization as the abstraction/decoupling of one computing resource from another computing resource. Yup, it’s that simple (see the picture beside). Example: server virtualization means we abstract/decouple the operating system from a server.
Today, with virtualization technology, a company can run several operating systems and several applications on top of the hardware it already owns, and new hardware purchases are only really made when capacity truly requires it. It is no longer the era in which a company buys a new server whenever a new application has to be deployed.
By stacking workloads together using virtualization technology, a company can get much more value from its hardware investment. Besides reducing the cost of buying new hardware (CAPEX/Capital Expenditure), the company can now also reduce operational costs (OPEX/Operational Expenditure), because the number of servers and other hardware (e.g., storage, routers, etc.) in the datacenter drops drastically – which in turn affects electricity usage, cooling, and the size of the datacenter required. The reduction in operational cost can be very significant. This approach is usually called resource “consolidation”.
Beyond the consolidation benefits above, by adopting virtualization a company can also enjoy improved “uptime/high availability” of its services, a much better-planned disaster recovery mechanism, more integrated asset monitoring, automation mechanisms for provisioning new servers/services, and significantly improved security. Ultimately, virtualization lays the foundation for reaching the concept of “Cloud Computing”. Cloud Computing itself is a concept built on the philosophy of very broad network access, resource pooling, services that users (not administrators) can access directly, services whose quality can be measured (and charged for accordingly), and services elastic enough to follow user demand quickly (adding or removing resources rapidly).
Implementing virtualization technology is the foundation that makes this new operational model (“cloud computing”) far more effective and efficient.
Once again, I am taking this article from another website (http://www.infoworld.com/d/data-center/review-puppet-vs-chef-vs-ansible-vs-salt-231308). It is a very good article that I would like to remember, which is why I am re-posting it on my blog.
Review: Puppet vs. Chef vs. Ansible vs. Salt
The leading configuration management and orchestration tools take different paths to server automation
The proliferation of virtualization coupled with the increasing power of industry-standard servers and the availability of cloud computing has led to a significant uptick in the number of servers that need to be managed within and without an organization. Where we once made do with racks of physical servers that we could access in the data center down the hall, we now have to manage many more servers that could be spread all over the globe.
This is where data center orchestration and configuration management tools come into play. In many cases, we’re managing groups of identical servers, running identical applications and services. They’re deployed on virtualization frameworks within the organization, or they’re running as cloud or hosted instances in remote data centers. In some cases, we may be talking about large installations that exist only to support very large applications or large installations that support myriad smaller services. In either case, the ability to wave a wand and cause them all to bend to the will of the admin cannot be discounted. It’s the only way to manage these large and growing infrastructures.
Puppet, Chef, Ansible, and Salt were all built with that very goal in mind: to make it much easier to configure and maintain dozens, hundreds, or even thousands of servers. That’s not to say that smaller shops won’t benefit from these tools, as automation and orchestration generally make life easier in an infrastructure of any size.
I looked at each of these four tools in depth, explored their design and function, and determined that, while some scored higher than others, there’s a place for each to fit in, depending on the goals of the deployment. Here, I summarize my findings.
Puppet Enterprise
Puppet arguably enjoys the biggest mind share of the four. It’s the most complete in terms of available actions, modules, and user interfaces. Puppet represents the whole picture of data center orchestration, encompassing just about every operating system and offering deep tools for the main OSes. Initial setup is relatively simple, requiring the installation of a master server and client agents on each system that is to be managed.
From there, the CLI (command-line interface) is straightforward, allowing module downloads and installation via the puppet command. Then, changes to the configuration files are required to tailor the module for the required task, and the clients that should receive the instructions will do so when they check in with the master or via a push that will trigger the modifications immediately.
There are also modules that can provision and configure cloud server instances and virtual server instances. All modules and configurations are built with a Puppet-specific language based on Ruby, or Ruby itself, and thus will require programmatic expertise in addition to system administration skills.
Test Center Scorecard (see the original article for the detailed scores): AnsibleWorks Ansible 1.3, Enterprise Chef 11.4, Puppet Enterprise 3.0, SaltStack Enterprise 0.17.0.
Puppet Enterprise has the most complete Web UI of the bunch, allowing for real-time control of managed nodes using prebuilt modules and cookbooks present on the master servers. The Web UI works well for management, but does not allow for much configuration of modules. The reporting tools are well developed, providing deep details on how agents are behaving and what changes have been made.
Chef is similar to Puppet in terms of overall concept, in that there’s a master server and agents installed on managed nodes, but it differs in actual deployment. In addition to a master server, a Chef installation also requires a workstation to control the master. The agents can be installed from the workstation using the knife tool that uses SSH for deployment, easing the installation burden. Thereafter, managed nodes authenticate with the master through the use of certificates.
I keep hearing stories from Customers and Prospects where Oracle appears to be trying to deceive them for the purposes of extorting more license money from them than they are legally required to pay. I also keep hearing stories of Oracle telling them they would not be supported if they virtualized their Oracle systems on VMware vSphere. This has gone on now for far too long and it’s time to fight back and stop the FUD (Fear, Uncertainty, Doubt)!
In my opinion, the best way to prevent this situation for your company is to know the right questions to ask and to know what your obligations are. The aim of this article is to give you the tools to pay only what you legally owe, while making the most efficient and economic use of your licenses, and to get the world-class support that you are used to, even in a virtualized environment on VMware vSphere – all without sacrificing availability or performance.
I’m going to start this article by quoting Dave Welch, CTO, House of Brick – “I believe in paying every penny I owe. However, beyond that, it is my discretion to who or what I donate and in what amount. I have no patience with individuals or entities that premeditate the creation of OLSA compliance issues. I similarly have no patience with the knowing spreading of FUD by some professionals in what could be construed as extortion of funds beyond customers’ executed contractual obligations. I will continue to vigorously promote and defend the legal rights of both software vendors and their customers even if that means I induce accelerated hair loss through rapid, frequent hat swapping.” Source: Jeff Browning‘s EMC Communities article – Comments by Dave Welch of House of Brick on Oracle on VMware Licensing.
I agree with Dave on this. So I am going to show you how you can pay what you owe, while using what you pay for as efficiently and cost effectively as possible, and show you how you can still enjoy the full support you are entitled to. Without the scaremongering that sometimes accompanies discussions with Oracle Sales Reps.
For those that aren’t familiar with the term FUD, it is an acronym which stands for Fear, Uncertainty and Doubt – something some companies and professionals seem to go to great lengths to create in the minds of customers.
FUD #1 – Oracle Licensing and Soft Partitioning
Oracle’s Server/Hardware Partitioning document outlines the different types of partitioning and how they impact licensing. Oracle may try and tell you that licensing a VMware environment will be more expensive as they don’t consider VMware Hard Partitioning. This is complete rubbish. This assertion is completely irrelevant unless you were only planning on deploying a single small database on a very small subset of a very large server. In this case you probably wouldn’t be using Enterprise Edition and may not be paying per CPU Core (Named User Plus instead). Why would you deploy such a system when you could easily purchase a server that is the right size for the job and licensed appropriately for the job? There is absolutely no requirement to run Oracle Enterprise Edition just because you are virtualizing your databases.
There is absolutely no increase in licensing costs over and above what you would have to pay for the same physical infrastructure to run your Oracle Database without virtualization. You still have to pay what you owe, for what you use. The truth is that your costs could actually be significantly lower when virtualizing on VMware vSphere, because you can get more productive work done on the same amount of physical hardware, so the license requirements and your costs will be significantly less. This is because you can run multiple Oracle databases on the same server and effectively share the resources, including memory, provided you take care during your design to avoid any undesirable performance impacts. Take this image, for example, showing the consolidation of two dissimilar workloads on the same hardware (Source: VMware).
Below, I quote an excellent resource from http://vcdx133.com on preparing for the VCDX certification – the highest certification from VMware Inc.
Once you have finished your design documents, you then have to prepare the supporting documentation. The length of this task is very often underestimated by aspiring candidates, who either rush the job to make the submission and risk having their application rejected as incomplete, or miss their submission date.
The term “Supporting Documentation” refers to the following documents/sections from the VCDX Blueprint:
Standard Operating Procedures
Assuming you have validated that your design meets your customer’s requirements, you now have to ensure that your supporting documentation will allow the design to be implemented, configured, tested, and operated successfully. This is where we separate the men from the boys: so much effort has gone into the design process, and now you need to marshal your resources and push through the creation of the supporting documentation.
The strategy is very simple: create a linked list that maps each physical design decision to the Implementation Plan, Configuration Guide, Test Plan, and SOPs. This is summarised in the diagram below:
It is very important to maintain a unique numbering system throughout all of your documentation; this will allow you to quickly verify that all components and scenarios are covered, and then collate them into a matrix to ensure that nothing is missed. For example: supporting_documentation_template and supporting_documentation_matrix.