Hyperconverged Battle Blogs Recap – Performance Tests

Just a recap: these are some public materials regarding VMware Virtual SAN versus its competitors as the hyperconverged battle continues.

VSAN vs Nutanix Head-to-Head Performance Testing — Part 1

https://blogs.vmware.com/storage/2015/06/03/vsan-vs-nutanix-head-head-performance-testing-part-1/

VSAN vs Nutanix Head-to-Head Performance Testing — Part 2

https://blogs.vmware.com/storage/2015/06/10/vsan-vs-nutanix-head-head-performance-testing-part-2/

VSAN vs Nutanix Head-to-Head Performance Testing — Part 3

https://blogs.vmware.com/storage/2015/06/12/vsan-vs-nutanix-head-head-testing-part-3/

VSAN vs. Nutanix — Head-to-Head Performance Testing — Part 4 — Exchange!

https://blogs.vmware.com/storage/2015/07/06/vsan-vs-nutanix-head-head-performance-testing-part-4-exchange/

VSAN and The Joys Of Head-to-Head Performance Testing

http://blogs.vmware.com/storage/2015/06/29/vsan-joys-head-head-performance-testing/

http://blogs.vmware.com/virtualblocks/2015/06/21/vmware-vsan-vs-nutanix-head-to-head-pricing-comparison-why-pay-more/ 

Virtual SAN 6.0 Performance with VMware VMmark

http://blogs.vmware.com/performance/2015/04/virtual-san-6-0-performance-vmware-vmmark.html

StorageReview.com:

VMware Virtual SAN Review: Overview and Configuration

VMware Virtual SAN Review: VMmark Performance

VMware Virtual SAN Review: Sysbench OLTP Performance

VMware Virtual SAN Review: SQL Server Performance

Why We Don’t Have a Nutanix NX-8150 Review

Other Blogs:

http://www.theregister.co.uk/2015/08/07/nutanix_digs_itself_into_hole_and_refuses_to_drop_the_shovel/

http://hansdeleenheer.com/when-bad-press-really-is-bad-press/

https://lonesysadmin.net/2015/08/07/three-thoughts-on-the-nutanix-storagereview-situation/

 

By the way, did you realise that the EULA of one of the competitors states the following:

Use.
2.1. Limitations on Use.

You must not use the Software or Documentation except as permitted by this Agreement. You must not:

  1. disclose the results of testing, benchmarking or other performance or evaluation information related to the Software or the product to any third party without the prior written consent of Nutanix;
  2. access or use the Software or Documentation for any competitive purposes (e.g. to gain competitive intelligence; to design or build a competitive product or service, or a product providing features, functions or graphics similar to those used or provided by Nutanix; to copy any features, functions or graphics; or to monitor availability, performance or functionality for competitive purposes);

Man!!! Talk about transparency… How can we measure competitiveness, then, under that EULA?

 

Kind Regards,
Doddi Priyambodo

VSAN Erasure Coding – Storage Based Policy Management

A new policy setting has been introduced to accommodate the new RAID-5/RAID-6 configurations in VSAN (available only in All-Flash configurations). A minimum of 4 hosts is required for RAID-5, and a minimum of 6 hosts for RAID-6.

This new policy setting is called Failure Tolerance Method. This policy setting takes two values: performance and capacity. When it is left at the default value of performance, objects continue to be deployed with a RAID-1/mirror configuration for the best performance. When the setting is changed to capacity, objects are now deployed with either a RAID-5 or RAID-6 configuration.

The RAID-5 or RAID-6 configuration is determined by the number of failures to tolerate setting. If this is set to 1, the configuration is RAID-5. If this is set to 2, then the configuration is a RAID-6.
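As a back-of-the-envelope illustration (a sketch, not an official VMware sizing tool), the raw capacity consumed by each failure tolerance method can be computed from the documented multipliers: RAID-1 with FTT=1 keeps two full copies (2x), RAID-5 uses a 3+1 layout (about 1.33x), and RAID-6 uses a 4+2 layout (1.5x):

```python
# Rough sketch of VSAN raw-capacity overhead per failure tolerance method.
# Multipliers assume the documented 3+1 (RAID-5) and 4+2 (RAID-6) layouts.

OVERHEAD = {
    ("performance", 1): 2.0,     # RAID-1 mirror, FTT=1: two full copies
    ("capacity", 1): 4.0 / 3.0,  # RAID-5: 3 data + 1 parity
    ("capacity", 2): 1.5,        # RAID-6: 4 data + 2 parity
}

def raw_capacity_needed(vm_size_gb, method="performance", ftt=1):
    """Return the raw capacity (GB) consumed for a given usable VM size."""
    return vm_size_gb * OVERHEAD[(method, ftt)]

print(raw_capacity_needed(100, "performance", 1))  # 200.0 GB with RAID-1
print(raw_capacity_needed(100, "capacity", 1))     # ~133.3 GB with RAID-5
print(raw_capacity_needed(100, "capacity", 2))     # 150.0 GB with RAID-6
```

The capacity savings are why erasure coding exists, but remember the host minimums above still apply.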

 

Kind Regards,
Doddi Priyambodo

What is Hadoop? Why do we need to virtualize it using VMware?

What is Hadoop?

Hadoop is an Apache open source project that provides scalable and distributed computing, with much of its early development done at Yahoo!. It provides a framework that can process large amounts of data by leveraging the parallel and distributed processing of many compute nodes arrayed in a cluster. These clusters can run on a single host or scale up to thousands of machines, depending on the workload.

What are Hadoop Components?

These are the core modules of Hadoop, which provide its distributed computing capabilities.

  • Hadoop Common – The utilities that support the other Hadoop modules.
  • Hadoop Distributed File System – The distributed file system used by most Hadoop distributions, also known by its initials, HDFS.
  • Hadoop YARN – Used to manage cluster resources and schedule jobs.
  • Hadoop MapReduce – A YARN-based system for processing large amounts of data.
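To make the MapReduce model concrete, here is a minimal, conceptual word-count sketch in plain Python (an illustration of the map/shuffle/reduce flow, not Hadoop's actual Java or Streaming API):

```python
from collections import defaultdict

# Conceptual word-count sketch of the MapReduce model:
# map emits (key, value) pairs, shuffle groups them by key, reduce aggregates.

def map_phase(line):
    return [(word, 1) for word in line.split()]

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

lines = ["big data big cluster", "big data"]
pairs = [p for line in lines for p in map_phase(line)]
counts = reduce_phase(shuffle(pairs))
print(counts)  # {'big': 3, 'data': 2, 'cluster': 1}
```

In real Hadoop, the map and reduce tasks run on many nodes in parallel and the shuffle happens over the network; the flow is the same.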

In addition to the core modules, there are others that provide specific and specialized capabilities to this distributed processing framework. These are just some of the tools:

  • Ambari – A web-based tool for provisioning, management, and monitoring of Hadoop clusters.
  • HBase – Distributed database that supports structured data storage.
  • Hive – Data warehouse model with data summarization and ad hoc query capability.
  • Pig – Data flow language.
  • ZooKeeper – Coordination service for distributed applications.

These modules are available from the Apache open-source project, but there are also more than thirty companies that provide Hadoop distributions, packaging the open-source code together with competing management solutions, processing engines, and many other features. Some of the best-known and most widely used distributions come from Cloudera, MapR, and Hortonworks.

Why do we need to Virtualize Hadoop workloads?

Now that we know what Hadoop is (and virtualization is, of course, the constant topic of this blog), is Hadoop suitable for virtualization? Yes. If you would like the following additional value from Hadoop, you should consider virtualizing the workload.

  • Better resource utilization:
    Collocating virtual machines containing Hadoop roles with virtual machines containing different workloads on the same set of VMware ESXi™ server hosts can balance the use of the system. This leads to lower operating expenses and lower capital expenses as you can leverage the existing infrastructure and skills in the data center and you do not have to invest in bare-metal servers for your Hadoop deployment.
  • Alternative storage options:
    Originally, Hadoop was developed with local storage in mind, and this type of storage scheme can also be used with vSphere. The shared storage that is frequently used as a basis for vSphere can also be leveraged for Hadoop workloads. This reinforces leveraging the existing investment in storage technologies for greater efficiency in the enterprise.
  • Isolation:
    This includes running different versions of Hadoop itself on the same cluster, or running Hadoop alongside other applications, forming an elastic environment for different Hadoop tenants. Isolation can reduce your overall security risk, ensure you are meeting your SLAs, and support offering Hadoop as a service back to the lines of business.
  • Availability and fault tolerance:
    The NameNode, the Resource Manager and other Hadoop components, such as Hive Metastore and HCatalog, can be single points of failure in a system. vSphere services such as VMware vSphere High Availability (vSphere HA) and VMware vSphere Fault Tolerance (vSphere FT) can protect these components from server failure and improve availability.
  • Balance the loads:
    Resource management tools such as VMware vSphere vMotion® and VMware vSphere Distributed Resource Scheduler™ (vSphere DRS) can provide availability during planned maintenance and can be used to balance the load across the vSphere cluster.
  • Business critical applications:
    Uptime is just as important in a Hadoop environment; why would the enterprise want to go back to a time when servers and server components were single points of failure? Leverage the existing investment in vSphere to meet SLAs and provide an excellent service back to the business.

VMware also has a component called VMware Big Data Extensions (https://www.vmware.com/products/big-data-extensions) to rapidly deploy highly available Hadoop components and easily manage the infrastructure workloads.

vSphere Big Data Extensions enables rapid deployment, management, and scalability of Hadoop in virtual and cloud environments. Scale-in/scale-out capabilities built into the Big Data Extensions tools enable on-demand Hadoop instances.

Everything from simple cloning to sophisticated end-user provisioning products such as VMware vRealize Automation™ can speed up the deployment of Hadoop. This enables IT to act as a service provider, offering Hadoop as a service to the different lines of business and providing faster time to market. This further enables today's IT to be a value driver rather than being seen as a cost center.

For more detail about VMware Big Data Extensions, please see this datasheet from VMware: https://www.vmware.com/files/pdf/products/vsphere/VMware-vSphere-Big-Data-Extensions-Datasheet.pdf

 

Kind Regards,
Doddi Priyambodo

Installation and Documentation Guide for VMware SDDC Proof of Concept

PoC installation and documentation guides are generally available online, both on the VMware website and in various blogs, but here are some recommendations:

Google.com and VMware.com of course…

 

Kind Regards,
Doddi Priyambodo

MICROSERVICES – What is Cloud Native Application?

DevOps, Containers, Docker, Mesos, Kubernetes, microservices, 12-factor applications, 3rd platform, oh my! Modern application architecture and lifecycle are changing fast, and that means even more demands on IT. While some have argued that this new application approach calls for a whole new infrastructure, you can actually meet these new business-driven demands head on, leveraging your existing investment while still delivering the highest SLAs: performance, availability, security, compliance, and disaster recovery. This emerging 3rd Platform application stack not only fits into existing SDDC infrastructure investments; the SDDC is actually the best place to run containers and emerging 3rd platform applications.

Application Development and Delivery

 

If we look at the outcomes delivered by a new model of IT, businesses are increasing their focus on app and infrastructure delivery automation throughout the datacenter.

3RD PLATFORM – MICROSERVICES

3rd Platform! Microservices! What the heck are they? Put simply, the 3rd platform is a new paradigm for architecting applications to operate in a distributed fashion. While the 1st platform was designed around mainframes and the 2nd platform was designed around client-server, the 3rd platform is designed around the cloud. In other words, applications are designed and built to live in the cloud. We can effectively think of this as pushing many of the core infrastructure concepts (like availability and scale) into the architecture of the application itself, with containers being a large part of this; they can be thought of as lightweight runtimes for these applications. With proper application architecture and a rock-solid foundation either on-premises or in the cloud, applications can scale on demand, new versions can be pushed quickly, and components can be rebuilt and replaced easily, along with many other benefits discussed below.

History of Platforms

1st Platform systems were based around mainframes and traditional servers without virtualization. Consolidation was a serious issue and it was normal to run one application per physical server.

2nd Platform architectures have been the standard mode for quite a while. This is the traditional Client/Server/Database model with which you are likely very familiar, leveraging the virtualization of x86 hardware to increase consolidation ratios, add high availability and extremely flexible and powerful management of workloads.

3rd Platform moves up the stack, standardizing on Linux Operating Systems primarily, which allows developers to focus on the application exclusively. Portability, scalability and highly dynamic environments are valued highly in this space. We will focus on this for the rest of the module.

Does this mean you should immediately move all of your applications to this model? Not so fast! While 3rd Platform architectures are exciting and extremely useful, they will not be the answer for everyone. A thorough understanding of the benefits and, more importantly, the complexities of this new world is extraordinarily important. VMware’s Cloud-Native Apps group is dedicated to ensuring our customers are well informed in this space and can adopt this technology confidently and securely when the time is right.

Microservices are growing in popularity, due in no small part to companies like Netflix and PayPal that have embraced this relatively new model. When we consider microservices, we need to understand both the benefits and the limitations inherent in the model, as well as ensure we fully understand the business drivers.

At its heart, microservice architecture is about doing one thing and doing it well. Each microservice has one job. This is clearly in stark contrast to the monolithic applications many of us are used to; using microservices, we can update components of the application quickly without forcing a full recompile of the entire application. But it is not a “free ride” – this model poses new challenges to application developers and operations teams as many assumptions no longer hold true.

The recent rise of containerization has directly contributed to the uptake of microservices, as it is now very easy to quickly spin up new, lightweight runtime environments for an application.

The ability to provide single-purpose components with clean APIs between them is an essential design requirement for microservices architecture. At their core, microservices have two main characteristics: they are stateless and distributed. To achieve this, let’s take a closer look at the Twelve-Factor App methodology to help explain microservices architecture as a whole.

THE TWELVE FACTOR APP

To allow the developer maximum flexibility in their choice of programming languages and back-end services, Software-as-a-Service web applications should be designed with the following characteristics:

  • Use of a declarative format to minimize or eliminate side effects by describing what the program should accomplish rather than how to go about it. At a high level, it is the difference between a section of code and a configuration file.
  • A clean contract with the underlying operating system, enabling the application to run and execute on any infrastructure. APIs are commonly used to achieve this.
  • Ability to be deployed into modern cloud platforms; removing the dependencies on underlying hardware and platform.
  • Keep development, staging, and production as similar as possible, minimizing the deviation between these environments to support continuous deployment.
  • Ability to scale up (and down) as the application requires without needing to change the tool sets, architecture or development practices.

At a high level, the 12 Factors that are used to achieve these characteristics are:

  1. Codebase – One codebase tracked in revision control, many deploys
  2. Dependencies – Explicitly declare and isolate dependencies
  3. Config – Store config in the environment
  4. Backing Services – Treat backing services as attached resources
  5. Build, release, run – Strictly separate build and run stages
  6. Processes – Execute the app as one or more stateless processes
  7. Port Binding – Export services via port binding
  8. Concurrency – Scale out via the process model
  9. Disposability – Maximize robustness with fast startup and graceful shutdown
  10. Dev/Prod Parity – Keep development, staging, and production as similar as possible
  11. Logs – Treat logs as event streams
  12. Admin Processes – Run admin/management tasks as one-off processes

For additional detailed information on these factors, check out 12factor.net.
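As a small illustration of factor 3 (Config), here is a minimal Python sketch that reads settings from the environment instead of hard-coding them; the variable names (DATABASE_URL, MAX_WORKERS) are hypothetical, chosen for illustration:

```python
import os

# Factor 3 (Config): store config in the environment, not in the code.
# DATABASE_URL and MAX_WORKERS are hypothetical names for illustration.

def load_config():
    return {
        "database_url": os.environ.get("DATABASE_URL", "sqlite:///dev.db"),
        "max_workers": int(os.environ.get("MAX_WORKERS", "4")),
    }

config = load_config()
print(config["database_url"], config["max_workers"])
```

The same build artifact can then be promoted from development to staging to production unchanged, with only the environment differing, which is exactly the point of factors 3, 5, and 10.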

BENEFIT OF MICROSERVICES

Microservice architecture has benefits and challenges. If the development and operating models in the company do not change, or only partially change, things could get muddled very quickly. Decomposing an existing app into hundreds of independent services requires some choreography and a well thought-out plan. So why are teams considering this move? Because there are considerable benefits!

Resilience

 With a properly architected microservice-based application, the individual services will function similarly to a bulkhead in a ship. Individual components can fail, but this does not mean the ship will sink. The following tenet is held closely by many development teams – “Fail fast, fail often.” The quicker a team is able to identify a malfunctioning module, the faster they can repair it and return to full operation.

Consider an online music player application – as a user, I might only care about playing artists in my library. The loss of the search functionality may not bother me at all. In the event that the Search service goes down, it would be nice if the rest of the application stays functional. The dev team is then able to fix the misbehaving feature independently of the rest of the application.

Defining “Service Boundaries” is important when architecting a microservice-based application!
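The music-player scenario above can be sketched in a few lines of Python (the service names and failure mode are hypothetical): the search service fails, but playback keeps working and search degrades to empty results:

```python
# Graceful-degradation sketch: one failing service does not sink the ship.
# Service names and the failure mode are hypothetical, for illustration.

def search_service(query):
    raise ConnectionError("search backend is down")

def play_service(track):
    return f"now playing: {track}"

def search(query):
    try:
        return search_service(query)
    except ConnectionError:
        return []  # degrade to empty results instead of crashing the app

print(play_service("track-1"))  # playback is unaffected
print(search("jazz"))           # [] while search is down
```

Real implementations typically wrap this idea in circuit breakers and timeouts rather than a bare try/except, but the bulkhead principle is the same.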

Scaling

If a particular service is causing latency in your application, it’s trivial to scale up instances of that specific service if the application is designed to take full advantage of microservices. This is a huge improvement over monolithic applications.

Similar to the Resilience topic, with a monolithic application, one poorly-performing component can slow down the entire application. With microservices, it is almost trivial to scale up the service that is causing the latency. Once again, this scalability must be built into the application’s DNA to function properly.

Deployment

Once again, microservices allow components to be upgraded and even changed out for entirely new, heterogeneous pieces of technology without bringing down the entire application. Netflix pushes updates constantly to production code in exactly this manner.

Misbehaving code can be isolated and rolled back immediately. Upgrades can be pushed out, tested, and either rolled back or pushed out further if they have been successful.

Organizational

“Organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations” –Melvin Conway

The underlying premise here is that the application should align to the business drivers, not to the fragmentation of the teams. Microservices allow for the creation of right-sized, more flexible teams that can more easily align to the business drivers behind the application. Hence, ideas like the “two pizza rule” in which teams should be limited to the number of people that can finish two pizzas in a sitting (conventional wisdom says this is eight or less…though my personal research has proved two pizzas do not feed more than four people.)

No Silver Bullet!

Microservices can be accompanied by additional operations overhead compared to a monolithic application provisioned to an application server cluster. When each service is built out separately, each could potentially require clustering for failover and high availability. When you add in load balancing, logging, and messaging layers between these services, the footprint becomes sizable even in comparison to a large off-the-shelf application. Microservices also require a considerable amount of DevOps and release automation skills. The responsibility of ownership does not end when the code is released into production; the developer essentially owns the application until it is retired. The natural evolution of the code and the collaborative style in which it is developed can pose challenges when making a major change to the components of the application. This can be partially solved with backwards compatibility, but it is not the panacea that some in the industry may claim.

Microservices suit only certain use cases, and even then they open up a world of new possibilities that come with new challenges and operational hurdles. How do we handle stateful services? What about orchestration? What is the best way to store data in this model? How do we guarantee a data persistence model? Precisely how do I scale an application properly? What about “simple” things like DNS and content management? Some of these questions do not have definitive solutions yet. A distributed system also introduces a new level of complexity around concerns such as network latency, fault tolerance, versioning, and unpredictable loads in the application. The operational cost of application developers needing to consider these potential issues in new scenarios can be high and should be expected throughout the development process.

When considering the adoption of microservices, ensure that the use case is sound, that the team is aware of the potential challenges, and above all, that the benefits of this model outweigh the cost.

Recommended reading:  If you would like to learn more about the operational and feasibility considerations of Microservices, look up Benjamin Wootton and read some of his publications on the topic, specifically ‘Microservices – Not A Free Lunch!’.

Consideration to Deploy Edge Cluster in vCloud Director. Also discussing about VSAN Architecture.

If we are using vCloud Director, the Edge cluster cannot be combined with the management cluster; it can only be combined with the resource/payload/workload cluster.

If we are using VSAN, there are several considerations as well if we want to use it for the management cluster.

Please read these references for the technical details:

Once again, thanks Bayu for the discussion.

 

Kind Regards,
Doddi Priyambodo

VMware NSX Use Cases in Real World IT Production

NSX Overview

 




These are some use cases for VMware NSX, detail of each use cases will be explained in another post thread.

Use Case 1 : Network Segmentation
Use Case 2 : Microsegmentation for Securing VDI Infrastructure
Use Case 3 : Intelligent Grouping for Unsupported Operating Systems
Use Case 4 : Automated Security in a Software Defined Data Center (ex: Quarantine zone)
Use Case 5 : Advanced Security (IDS/IPS) Insertion – Example: Palo Alto Networks NGFW
Use Case 6 : ‘Collapsed’ DMZ
Use Case 7 : Integrate Dev, Test and Prod environment into single infrastructure
Use Case 8 : Securing access to and from Jump Box servers
Use Case 9 : Multisite Networking and Security (Cross vCenter)
Use Case 10 : DC Consolidation/Migration – Mergers & Acquisitions
Use Case 11 : Hybrid/Public Clouds Integration
Use Case 12 : Disaster Recovery
Use Case 13 : Self Service IT
Use Case 14 : Fast Application Deployment of template
Use Case 15 : Islands of Unused Compute Capacity
Use Case 16 : Compute Asset Consolidation
Use Case 17 : Reducing capital outlay in expensive HW devices by NSX Edge Services

 

Kind Regards,
Doddi Priyambodo

How Long Does a Physical-to-Virtual (P2V) Server Migration Take?

When we decide to convert to vSphere virtual machines, there will be a process of converting the existing physical or virtual machines.
We usually call this Physical to Virtual (P2V) or Virtual to Virtual (V2V).

When we use P2V/V2V, the conversion goes over the data network. The connection between source and destination is typically at least 1 Gbps.
However, if the network is shared rather than dedicated, expect a throughput of roughly 20 GB to 50 GB of data transferred per hour (this needs to be analyzed directly in the customer's environment).
The conversion process only sends the utilized data; for example, if a disk is 300 GB in size but only 100 GB is used, then only 100 GB of data is sent.

Here is the transfer formula:
Amount of data transferred = number of VMs or servers x disk size x disk utilization

Time required = amount of data transferred / estimated throughput

Example:
If 10 TB of data must be transferred and the throughput is 50 GB/hour, the time required is 200 hours (roughly 8 days), assuming a stable transfer rate.
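The formula above can be sketched as a small calculator (a sketch assuming decimal units, 1 TB = 1,000 GB, and a stable transfer rate, as in the example):

```python
# P2V/V2V transfer-time estimator based on the formula above.
# Assumes decimal units (1 TB = 1000 GB) and a stable transfer rate.

def data_to_transfer_gb(num_vms, disk_size_gb, utilization):
    """Total data sent: only utilized disk space is transferred."""
    return num_vms * disk_size_gb * utilization

def migration_hours(data_gb, throughput_gb_per_hour):
    return data_gb / throughput_gb_per_hour

# Example from the post: 10 TB at 50 GB/hour.
hours = migration_hours(10 * 1000, 50)
print(hours, hours / 24)  # 200.0 hours, roughly 8.3 days
```

In practice you would run this per migration wave, using the throughput actually measured in the customer's environment rather than the 50 GB/hour assumption.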

 

Kind Regards,
Doddi Priyambodo

vRealize Automation 7.0 List of Improvements and it is GA now!

I am really excited about this news, because I implemented a distributed vRA 6.1 deployment a year ago for one of my customers, and it was a really complex installation experience. One of the improvements here is the installation mechanism, which will simplify installations! Lots of cool new features and integrations too. Really cool!

The following is a partial highlight of the new features:

Streamlined and Automated Wizard-based Installation

  • Introduces management agent to automate the installation of Windows components and to collect logs
  • Automates the deployment of all vRealize Automation components
  • Installation wizards based on deployment needs: Minimal (Express) and Enterprise (Distributed) Installations

Simplified Deployment Architecture and High Availability Configuration

  • Embedded authentication service by using VMware Identity Manager
  • Converged Application Services in vRealize Automation Appliance
  • Reduced minimal number of appliances for HA configuration
  • Automated embedded PostgreSQL clustering with manual failover
  • Automated embedded vRealize Orchestrator clustering

Enhanced Authentication Service

  • Integrated user interface providing a common look and feel
  • Enabled multiple features by new authentication service

Simplified Blueprint Authoring for Infrastructure and Applications

  • Single unified model for both machine and application blueprints and unified graphical canvas for designing machine and application blueprint with dependencies and network topology
  • Software component (formerly software service in Application Services) authoring on vSphere, vCloud Air, vCloud Director, and AWS endpoints
  • Extend or define external integrations in the canvas by using XaaS (formerly Advanced Service Design)
  • Enable team collaboration and role segregation by enhancing and introducing fine-grain roles
  • Blueprints as human-readable code that can be created in the editor of your choice, stored in source control, and imported and exported across one or more vRealize Automation 7.0 instances
  • Customer-requested machine and application blueprints provided
  • Additional blueprints available on the VMware Solutions Exchange

Simplified and Enhanced NSX Support for Blueprint Authoring and Deployment

  • Dynamically configure NSX Network and micro-segmentation unique for each application
  • Automated connectivity to existing or on-demand networks
  • Micro-segmentation for application stack isolation
  • Automated security policy enforcement by using NSX security policies, groups, and tags
  • On-demand dedicated NSX load balancer

Simplified vRealize Automation REST API

  • Simplified schema for API requests by switching to normal JSON model
  • Follow-on request URIs and templates exposed as links in response bodies (HATEOAS)
  • New APIs to support business group and reservation management
  • Improved documentation and samples
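To make the HATEOAS point concrete, here is a generic Python sketch of pulling follow-on URIs out of a response body; the JSON shape and URLs here are hypothetical illustrations, not the documented vRealize Automation schema:

```python
# Generic HATEOAS sketch: follow-on request URIs are embedded in the
# response body as links. This JSON shape and the URLs are hypothetical.

response = {
    "id": "req-42",
    "status": "IN_PROGRESS",
    "links": [
        {"rel": "self", "href": "https://vra.example.com/requests/req-42"},
        {"rel": "next", "href": "https://vra.example.com/requests/req-42/status"},
    ],
}

def link(body, rel):
    """Return the href for a given link relation, or None if absent."""
    return next((l["href"] for l in body["links"] if l["rel"] == rel), None)

print(link(response, "next"))
```

The client never hard-codes the follow-on URI; it discovers it from the response, which is what lets the API evolve without breaking callers.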

Enhanced Cloud Support for vCloud Air and AWS

  • Software component authoring for vCloud Air, vCloud Director, and Amazon AWS
  • Simplified blueprint authoring for vCloud Air and vCloud Director
  • Improved vCloud Air endpoint configuration
  • Optional proxy configuration

Event-Based Extensibility Provided by Event Broker

  • Use vRealize Orchestrator workflows to subscribe to most events that happen in the system, or to custom events
  • Support blocking and non-blocking subscriptions
  • Provide administrative user interface for extensibility configurations

Enhanced Integration with vRealize Business

  • Unified location in vRealize Business to define flexible pricing policies for infrastructure resources, machine and application blueprints, and all types of endpoints in vRealize Automation
  • Support for operational costs, one-time costs, and costs on custom properties
  • Role-based showback reports and fully leverage new features in vRealize Business 7.0

CloudClient Update

  • Content management (import and export blueprints between instances or tenants in vRealize Automation 7.0)
  • Existing functionality updated for vRealize Automation 7.0 APIs

vRealize Orchestrator 7 New Features

  • Introduce vRealize Orchestrator Control Center for easy monitoring and troubleshooting
  • Significant Smart Client improvements including Workflow tagging UI, Client reconnect options and enhanced search capabilities
  • vSphere 6.x vAPI endpoint support

Other Improvements

  • Enhanced management of tenant, business group, approval, and entitlements
  • Customizable columns in the table for a given type of custom resource defined in XaaS
  • Accept a mix of license input, including vRealize Suite, vCloud Suite, and vRealize Automation Standalone
  • Improved stability, quality, and performance

 

The complete details of the improvements can be read here: http://pubs.vmware.com/Release_Notes/en/vra/vrealize-automation-70-release-notes.html

Kudos! Great enhancements and innovations from the VMware R&D team!

 

Kind Regards,
Doddi Priyambodo

 

Technical Questions Asked During vSphere Design Requirement Analysis

Here is a brief summary of some of the basic technical questions that are usually asked when conducting a Requirement Analysis / Design Workshop engagement with a customer.

Below are some of the high-level questions I usually ask, digging deeper based on each of them. (Note: these are technical questions, so they are not meant for business people or C-level executives. Finding the correct audience is important.)

  • Compute: To gather information regarding the planned target compute infrastructure
  • Storage: To understand the current and expected storage landscape
  • vCenter: To describe the state of the vCenter managing the ESXi environment
  • Network: To gather information about the current and target network infrastructure
  • Backup & Patching: To understand the current backup and patching methodology
  • Monitor: To analyze the current and expected monitoring processes
  • VM Workloads: To analyze the details of the current physical workloads to be virtualized and consolidated
  • Security: To understand the current security practices in detail
  • Processes & Operations: To understand the current operational procedures and processes
  • Availability & Disaster Recovery: To gather information on business continuity processes

A more detailed breakdown of the questions above can also be made, for example:

  • Compute: hardware type, network, disk, brand, redundancy, processor, storage connectivity, booting, automation, scalability, etc.
  • Storage: SAN/NAS/iSCSI/NFS/VSAN, IOPS, latency, storage technology, cloning/snapshots, replication, etc.
  • vCenter: linked mode, appliance, database decision, disk size, CPU/memory sizing, prerequisites, etc.
  • Network: leaf-spine, backbone technology, bandwidth, VLAN, VXLAN, teaming, vPC, link aggregation, distributed switch, vendors, etc.
  • Backup and Patching: storage backup, 3rd-party backup, VDP, VADP, Update Manager, etc.
  • Monitor: items to monitor, centralized log server, performance, capacity, usage, thresholds, alerts, placement, etc.
  • VM Workloads: user growth, IOPS, Tier 1/Tier 2/Tier 3, mission critical, OS clustering, Java/Oracle/SQL Server/SAP, etc.
  • Security: firewall ports, virus protection, distributed firewall, system hardening, lockdown mode, access, etc.
  • Processes and Operations: SLA agreements, private/public/hybrid strategy, budget/scope constraints, unique processes, etc.
  • Availability & DR: RPO, RTO, VMware HA, Fault Tolerance, Active-Active DC, bandwidth and hops, priority of protected VMs, etc.

I hope this is useful.

Kind Regards,
Doddi Priyambodo