Best Practice Guide for Running Java on VMware vSphere

vSphere today is highly dependable for running all kinds of business-critical applications built in various programming languages such as Java, .NET, and others. High-load database systems such as billing and analytics are also supported very well on vSphere, whether they run on Oracle Database, SQL Server, or other engines. Beyond good performance, the biggest benefits are the high-availability mechanisms and the more advanced operational and management tooling for monitoring the health of these applications. For Java specifically, this best practice guide highlights several things that deserve attention, chief among them memory usage in the Java Virtual Machine.

  • Size the virtual machine's memory to accommodate the configured Java heap, the memory needed by the JVM code itself, and the other memory in use by the guest operating system.
  • Set the virtual machine's memory reservation to the amount of memory calculated above, or reserve the VM's entire memory size (as long as it exceeds the figure from the previous point). This matters because if memory swapping occurs, JVM heap performance drops sharply, especially during garbage collection.
  • Determine the optimal number of virtual CPUs for the VM by testing several vCPU configurations under the same load.
  • If the JVM's garbage collector uses multiple threads, make sure the number of GC threads matches the number of virtual CPUs configured on the VM (see the sketch after this list).
  • To simplify monitoring and maintenance, run a single JVM process per virtual machine.
  • Always enable the balloon driver, so that if memory overcommitment occurs the VM can manage its memory gracefully through this mechanism.
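
As a rough illustration of the heap sizing and GC-thread guidance above, suppose a VM is configured with four vCPUs and the JVM needs a 4 GB heap (both figures are assumptions for illustration, not recommendations). The JVM might then be launched with fixed heap bounds and a matching parallel GC thread count; the small class below is a minimal sketch that prints the values those flags control, so they can be verified inside the guest:

```java
// Launch flags are illustrative, e.g.:
//
//   java -Xms4g -Xmx4g -XX:+UseParallelGC -XX:ParallelGCThreads=4 JvmSizingCheck
//
// Pinning -Xms to -Xmx keeps the heap size stable, which keeps the memory
// reservation calculated in the bullets above valid; ParallelGCThreads is
// set equal to the number of vCPUs configured on the VM.
public class JvmSizingCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.printf("Max heap (-Xmx):  %d MB%n", rt.maxMemory() / (1024 * 1024));
        System.out.printf("Available vCPUs:  %d%n", rt.availableProcessors());
    }
}
```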

In summary, this best practice guide can be downloaded from this link:

PS:
– In a previous post, I reviewed the best practices for running Oracle Database on vSphere. >> http://bicarait.com/?s=oracle+database

Running your Docker in Production Environment using VMware vSphere Integrated Containers – (Part 1)

In a previous tutorial I explained how to run Docker on my Mac. Now it is time to move those containers from your laptop to a production environment. At VMware, we will utilise vSphere ESXi as the production-grade virtualisation technology that forms the foundation of the infrastructure.

In a production environment, many things need to be considered: availability, manageability, performance, reliability, scalability, and security (AMPRSS). These AMPRSS requirements can be met far more easily by moving your Docker containers from the development environment (your laptop) to the production environment (vSphere ESXi). One concern with Docker technology is that containers share the same kernel and are therefore less isolated than real VMs: a bug in the kernel affects every container.

vSphere Integrated Containers Engine allows developers familiar with Docker to develop in containers and deploy them alongside traditional VM-based workloads on vSphere clusters, and allows these workloads to be managed through the vSphere UI in a way familiar to existing vSphere admins.

Docker itself is far less capable than an actual hypervisor. It doesn't come with HA, live migration, hardware virtualization security, and so on. VIC (vSphere Integrated Containers) brings the container paradigm directly to the hypervisor, allowing you to deploy containers as first-class citizens. The net result is that containers inherit all of the benefits of VMs, because they are VMs. The Docker image, once instantiated, becomes a VM inside vSphere. This solves security as well as operational concerns at the same time.

But these are NOT traditional VMs that require, say, 2 TB of storage and take two minutes to boot. These are usually about as big as the Docker image itself and take a few seconds to instantiate. They boot from a minimal ISO that contains a stripped-down Linux kernel (based on Photon OS), and the container images and volumes are attached as disks.

The containerVMs are provisioned into a "Virtual Container Host" (VCH), which behaves much like a Swarm cluster but is implemented as logical, distributed capacity in a vSphere Resource Pool. You don't need to add or remove physical nodes to increase or decrease VCH capacity; you simply reconfigure its resource limits and let vSphere clustering and DRS (Distributed Resource Scheduler) handle the details.
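
Because a VCH presents a standard Docker API endpoint, existing Docker tooling can target it unchanged. As a hedged sketch (the endpoint address, certificate path, image, and container name are illustrative assumptions, not values from any VIC documentation), a Java program could provision a containerVM through that endpoint using the open-source docker-java client library:

```java
import com.github.dockerjava.api.DockerClient;
import com.github.dockerjava.api.command.CreateContainerResponse;
import com.github.dockerjava.core.DefaultDockerClientConfig;
import com.github.dockerjava.core.DockerClientBuilder;

public class VchExample {
    public static void main(String[] args) {
        // Hypothetical VCH endpoint; a real deployment would use the address
        // and TLS material reported when the VCH was created.
        DefaultDockerClientConfig config = DefaultDockerClientConfig.createDefaultConfigBuilder()
                .withDockerHost("tcp://vch.example.com:2376")
                .withDockerTlsVerify(true)
                .withDockerCertPath("/path/to/vch-certs")
                .build();
        DockerClient docker = DockerClientBuilder.getInstance(config).build();

        // Each container created through the VCH is instantiated as a
        // containerVM inside the vSphere Resource Pool backing it.
        CreateContainerResponse container = docker.createContainerCmd("nginx:latest")
                .withName("web-1")
                .exec();
        docker.startContainerCmd(container.getId()).exec();
    }
}
```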

The biggest benefit of VIC is that it helps draw a clear line between the infrastructure provider (IT admin) and the consumer (developer/ops). The consumer wins because they don't have to deal with managing container hosts, patching, configuring, and so on. The provider wins because they can leverage the operational model they already use today (including NSX and VSAN).

Developers will continue to build Docker containers and IT admins will keep managing VMs. The best of both worlds.

It can also be combined with other enterprise tools to manage the enterprise environment, such as vRealize Operations, vRealize Log Insight, Virtual SAN, VMware NSX, and vRealize Automation.

In this post, I will utilise these technologies from VMware:

  • vSphere ESXi 6 U2 as the number one, well-known and stable production-grade virtualisation technology.
  • vCenter 6 U2 as the central virtualisation management and operations tool.
  • vSphere Integrated Containers as the enterprise, production-ready container runtime for vSphere, allowing developers familiar with Docker to develop in containers and deploy them alongside traditional VM-based workloads on vSphere clusters. Download from here: The vSphere Integrated Containers Engine
  • VMware Admiral as the container management platform for deploying and managing container-based applications. It provides a UI for developers and app teams to provision and manage containers, including retrieving stats and info about container instances. Cloud administrators can manage container hosts and apply governance to their usage, including capacity quotas and approval workflows. Download from here: Admiral
  • VMware Harbor as an enterprise-class registry server that stores and distributes Docker images. It adds the UI and functionality usually required by an enterprise, such as security, identity, replication, and management. Download from here: Harbor

This is the block diagram for those components:

As you can see in the diagram above, vSphere Integrated Containers comprises three main components, all of which are available as open source on GitHub. With these three capabilities, vSphere Integrated Containers enables VMware customers to deliver a production-ready container solution to their developers and app teams.


*to be continued in part 2.

Kind Regards,
Doddi Priyambodo

MICROSERVICES – What is Cloud Native Application?

DevOps, containers, Docker, Mesos, Kubernetes, microservices, 12-factor applications, 3rd platform, oh my! Modern application architecture and lifecycle are changing fast, and that means even more demands on IT. While some have argued that this new application approach calls for a whole new infrastructure, you can actually meet these new business-driven demands head on, leveraging your existing investment while still delivering the highest SLAs: performance, availability, security, compliance, and disaster recovery. This emerging 3rd-platform application stack not only fits into existing SDDC infrastructure investments; that SDDC is actually the best place to run containers and emerging 3rd-platform applications.

Application Development and Delivery


Looking at the outcomes delivered by this new model of IT, businesses are increasing their focus on app and infrastructure delivery automation throughout the datacenter.

3RD PLATFORM – MICROSERVICES

3rd Platform! Microservices! What the heck are they? Put simply, the 3rd platform is a new paradigm for architecting applications to operate in a distributed fashion. While the 1st platform was designed around mainframes and the 2nd platform around client-server, the 3rd platform is designed around the cloud. In other words, applications are designed and built to live in the cloud. We can effectively think of this as pushing many of the core infrastructure concepts (like availability and scale) into the architecture of the application itself, with containers being a large part of this; they can be thought of as lightweight runtimes for these applications. With proper application architecture and a rock-solid foundation either on-premises or in the cloud, applications can scale on demand, new versions can be pushed quickly, components can be rebuilt and replaced easily, along with many other benefits discussed below.

History of Platforms

1st Platform systems were based around mainframes and traditional servers without virtualization. Consolidation was a serious issue and it was normal to run one application per physical server.

2nd Platform architectures have been the standard model for quite a while. This is the traditional client/server/database model with which you are likely very familiar, leveraging the virtualization of x86 hardware to increase consolidation ratios, add high availability, and provide extremely flexible and powerful management of workloads.

3rd Platform moves up the stack, standardizing on Linux Operating Systems primarily, which allows developers to focus on the application exclusively. Portability, scalability and highly dynamic environments are valued highly in this space. We will focus on this for the rest of the module.

Does this mean you should immediately move all of your applications to this model? Not so fast! While 3rd Platform architectures are exciting and extremely useful, they will not be the answer for everyone. A thorough understanding of the benefits and, more importantly the complexities in this new world are extraordinarily important. VMware’s Cloud-Native Apps group is dedicated to ensuring our customers are well informed in this space and can adopt this technology confidently and securely when the time is right.

Microservices are growing in popularity, due in no small part to companies like Netflix and PayPal that have embraced this relatively new model. When we consider microservices, we need to understand both the benefits and the limitations inherent in the model, as well as ensure we fully understand the business drivers.

At its heart, microservice architecture is about doing one thing and doing it well. Each microservice has one job. This is clearly in stark contrast to the monolithic applications many of us are used to; using microservices, we can update components of the application quickly without forcing a full recompile of the entire application. But it is not a “free ride” – this model poses new challenges to application developers and operations teams as many assumptions no longer hold true.

The recent rise of containerization has directly contributed to the uptake of microservices, as it is now very easy to quickly spin up new, lightweight runtime environments for the application.

The ability to provide single-purpose components with clean APIs between them is an essential design requirement for microservice architecture. At their core, microservices have two main characteristics: they are stateless and distributed. To see how this is achieved, let's take a closer look at the Twelve-Factor App methodology, which helps explain microservice architecture as a whole.
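
To make "one job, stateless, behind a clean API" concrete, here is a minimal sketch using only the JDK's built-in HTTP server; the service purpose, endpoint, and port are illustrative assumptions. Because the handler keeps no state between requests, any number of identical instances can run behind a load balancer:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// A single-purpose "greeting" microservice: one endpoint, no stored state.
public class GreetingService {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/greet", exchange -> {
            byte[] body = "{\"message\":\"hello\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start(); // stateless: scale by starting more identical copies
    }
}
```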

THE TWELVE FACTOR APP

To allow the developer maximum flexibility in their choice of programming languages and back-end services, Software-as-a-Service web applications should be designed with the following characteristics:

  • Use of a declarative format to attempt to minimize or eliminate side effects by describing what the program should accomplish rather than how to go about it. At a high level, this is the difference between a section of code and a configuration file.
  • A clean contract with the underlying operating system, which enables portability to run and execute on any infrastructure. APIs are commonly used to achieve this.
  • Ability to be deployed into modern cloud platforms; removing the dependencies on underlying hardware and platform.
  • Keep development, staging, and production as similar as possible. Minimize the deviation between these environments to support continuous development.
  • Ability to scale up (and down) as the application requires without needing to change the tool sets, architecture or development practices.

At a high level, the 12 Factors that are used to achieve these characteristics are:

  1. Codebase – One codebase tracked in revision control, many deploys
  2. Dependencies – Explicitly declare and isolate dependencies
  3. Config – Store config in the environment
  4. Backing Services – Treat backing services as attached resources
  5. Build, release, run – Strictly separate build and run stages
  6. Processes – Execute the app as one or more stateless processes
  7. Port Binding – Export services via port binding
  8. Concurrency – Scale out via the process model
  9. Disposability – Maximize robustness with fast startup and graceful shutdown
  10. Dev/Prod Parity – Keep development, staging, and production as similar as possible
  11. Logs – Treat logs as event streams
  12. Admin Processes – Run admin/management tasks as one-off processes

For additional detailed information on these factors, check out 12factor.net. A small example of factor 3 follows.
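
As an illustration of factor 3 (config in the environment), the hedged sketch below reads deployment-specific settings from environment variables; the variable names and default values are assumptions, not something the methodology prescribes:

```java
// Factor 3: read deploy-specific config from the environment rather than
// hard-coding it, so the same build runs unchanged in dev, staging, and
// production (which also supports factor 10, dev/prod parity).
public class AppConfig {
    public static void main(String[] args) {
        String dbUrl = System.getenv()
                .getOrDefault("DATABASE_URL", "jdbc:postgresql://localhost:5432/devdb");
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));
        System.out.println("Connecting to " + dbUrl + ", listening on port " + port);
    }
}
```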

BENEFITS OF MICROSERVICES

Microservice architecture has benefits and challenges. If the development and operating models in the company do not change, or only partially change, things could get muddled very quickly. Decomposing an existing app into hundreds of independent services requires some choreography and a well thought-out plan. So why are teams considering this move? Because there are considerable benefits!

Resilience

 With a properly architected microservice-based application, the individual services will function similarly to a bulkhead in a ship. Individual components can fail, but this does not mean the ship will sink. The following tenet is held closely by many development teams – “Fail fast, fail often.” The quicker a team is able to identify a malfunctioning module, the faster they can repair it and return to full operation.

Consider an online music player application – as a user, I might only care about playing artists in my library. The loss of the search functionality may not bother me at all. In the event that the Search service goes down, it would be nice if the rest of the application stays functional. The dev team is then able to fix the misbehaving feature independently of the rest of the application.

Defining “Service Boundaries” is important when architecting a microservice-based application!

Scaling

If a particular service is causing latency in your application, it’s trivial to scale up instances of that specific service if the application is designed to take full advantage of microservices. This is a huge improvement over monolithic applications.

Similar to the resilience topic, with a monolithic application one poorly performing component can slow down the entire application, while with microservices you can scale out just the service causing the latency. Once again, this scalability must be built into the application's DNA to function properly.

Deployment

Once again, microservices allow components to be upgraded and even changed out for entirely new, heterogeneous pieces of technology without bringing down the entire application. Netflix pushes updates constantly to production code in exactly this manner.

Misbehaving code can be isolated and rolled back immediately. Upgrades can be pushed out, tested, and either rolled back or pushed out further if they have been successful.

Organizational

“Organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations” –Melvin Conway

The underlying premise here is that the application should align to the business drivers, not to the fragmentation of the teams. Microservices allow for the creation of right-sized, more flexible teams that can more easily align to the business drivers behind the application. Hence, ideas like the “two pizza rule” in which teams should be limited to the number of people that can finish two pizzas in a sitting (conventional wisdom says this is eight or less…though my personal research has proved two pizzas do not feed more than four people.)

No Silver Bullet!

Microservices can be accompanied by additional operations overhead compared to a monolithic application provisioned to an application server cluster. When each service is built out separately, each can potentially require clustering for failover and high availability. When you add in load balancing, logging, and messaging layers between these services, the real estate starts to become sizable, even in comparison to a large off-the-shelf application. Microservices also require a considerable amount of DevOps and release automation skills. Responsibility for the application does not end when the code is released into production; the developers essentially own the application until it is retired. The natural evolution of the code and the collaborative style in which it is developed can lend themselves to challenges when making a major change to the components of the application. This can be partially solved with backwards compatibility, but it is not the panacea that some in the industry may claim.

Microservices fit only certain use cases, and even then they open up a world of new possibilities that come with new challenges and operational hurdles. How do we handle stateful services? What about orchestration? What is the best way to store data in this model? How do we guarantee a data persistence model? Precisely how do I scale an application properly? What about "simple" things like DNS and content management? Some of these questions do not have definitive solutions yet. A distributed system also introduces a level of complexity around concerns that may not previously have loomed so large: network latency, fault tolerance, versioning, and unpredictable loads in the application. The operational cost of application developers needing to consider these potential issues in new scenarios can be high and should be expected throughout the development process.

When considering the adoption of microservices, ensure that the use case is sound, the team is aware of the potential challenges, and, above all, the benefits of this model outweigh the cost.

Recommended reading:  If you would like to learn more about the operational and feasibility considerations of Microservices, look up Benjamin Wootton and read some of his publications on the topic, specifically ‘Microservices – Not A Free Lunch!’.

Scrum Day Asia 20121123 – AGILE SOFTWARE DEVELOPMENT LIFE CYCLE USING SCRUM

In this post, I will share one of the presentation decks I used when implementing an agile methodology for software development at one of my previous companies.

The full presentation was delivered at the Scrum Day Asia event (November 23rd, 2012) in Bandung, Indonesia. The other speakers (Joshua Partogi, Salma Desenta, Wirawan Winarto) and I hoped to change the nature of the SDLC from traditional to agile, and to turn software developers into a rockstar team! I also presented this topic (with a different but very similar deck) at other events, such as a Project Management Institute event at Microsoft and an IBM Innovation Day event.

I personally love page 57 of this deck. In one of our Scrum retrospective meetings, one of my team members put a post-it on the retrospective wall saying: "with scrum, it's not our fault. not your or my fault. it's our problem. it's always ours. not yours or mine"


I hope this is useful. Enjoy.

Kind Regards,

Doddi Priyambodo

How to Do Agile Software Development Using Scrum?

Below I attach an explanation of how to do software development by implementing the Scrum method, where Scrum is one method/practice for implementing agile software development. I made these slides several years ago, and I think they are still very relevant today. In fact, I think they are more relevant now than they were back then!

In the next installment, I will also share the tools I used when implementing Scrum with my team.


Enjoy 🙂

Review: Puppet vs. Chef vs. Ansible vs. Salt

Once again, I am taking this article from another website (http://www.infoworld.com/d/data-center/review-puppet-vs-chef-vs-ansible-vs-salt-231308). It is a very good article that I would like to keep for reference, which is why I am reposting it on my blog.

Review: Puppet vs. Chef vs. Ansible vs. Salt

The leading configuration management and orchestration tools take different paths to server automation


The proliferation of virtualization coupled with the increasing power of industry-standard servers and the availability of cloud computing has led to a significant uptick in the number of servers that need to be managed within and without an organization. Where we once made do with racks of physical servers that we could access in the data center down the hall, we now have to manage many more servers that could be spread all over the globe.

This is where data center orchestration and configuration management tools come into play. In many cases, we’re managing groups of identical servers, running identical applications and services. They’re deployed on virtualization frameworks within the organization, or they’re running as cloud or hosted instances in remote data centers. In some cases, we may be talking about large installations that exist only to support very large applications or large installations that support myriad smaller services. In either case, the ability to wave a wand and cause them all to bend to the will of the admin cannot be discounted. It’s the only way to manage these large and growing infrastructures.


Puppet, Chef, Ansible, and Salt were all built with that very goal in mind: to make it much easier to configure and maintain dozens, hundreds, or even thousands of servers. That's not to say that smaller shops won't benefit from these tools, as automation and orchestration generally make life easier in an infrastructure of any size.

I looked at each of these four tools in depth, explored their design and function, and determined that, while some scored higher than others, there’s a place for each to fit in, depending on the goals of the deployment. Here, I summarize my findings.

Puppet Enterprise
Puppet arguably enjoys the biggest mind share of the four. It’s the most complete in terms of available actions, modules, and user interfaces. Puppet represents the whole picture of data center orchestration, encompassing just about every operating system and offering deep tools for the main OSes. Initial setup is relatively simple, requiring the installation of a master server and client agents on each system that is to be managed.

From there, the CLI (command-line interface) is straightforward, allowing module downloads and installation via the puppet command. Then, changes to the configuration files are required to tailor the module for the required task, and the clients that should receive the instructions will do so when they check in with the master or via a push that will trigger the modifications immediately.

There are also modules that can provision and configure cloud server instances and virtual server instances. All modules and configurations are built with a Puppet-specific language based on Ruby, or Ruby itself, and thus will require programmatic expertise in addition to system administration skills.


Test Center Scorecard

Each product is rated on six criteria, weighted 20%, 20%, 20%, 20%, 10%, and 10% respectively; the weighted average gives the overall score:

  AnsibleWorks Ansible 1.3      9 7 8 8 9 9  =  8.2  VERY GOOD
  Enterprise Chef 11.4          9 8 7 9 8 9  =  8.3  VERY GOOD
  Puppet Enterprise 3.0         9 9 9 9 9 9  =  9.0  EXCELLENT
  SaltStack Enterprise 0.17.0   9 8 9 9 9 9  =  8.8  VERY GOOD

Puppet Enterprise has the most complete Web UI of the bunch, allowing for real-time control of managed nodes using prebuilt modules and cookbooks present on the master servers. The Web UI works well for management, but does not allow for much configuration of modules. The reporting tools are well developed, providing deep details on how agents are behaving and what changes have been made.

Enterprise Chef
Chef is similar to Puppet in terms of overall concept, in that there’s a master server and agents installed on managed nodes, but it differs in actual deployment. In addition to a master server, a Chef installation also requires a workstation to control the master. The agents can be installed from the workstation using the knife tool that uses SSH for deployment, easing the installation burden. Thereafter, managed nodes authenticate with the master through the use of certificates.


Google's Experience Implementing SCRUM

Here I would like to quote something I found while browsing the Internet about Scrum. I came across a very good presentation in which Jeff Sutherland reviews the experience of implementing SCRUM at the giant company GOOGLE.

Please view it via the original link below (I could not embed the video here, and the presentation slides are also available there):

http://www.infoq.com/presentations/Agile-Management-Google-Jeff-Sutherland

Summary
A retrospective on Google’s first Scrum implementation. Jeff Sutherland visited Google to do an analysis of the first Google implementation of Scrum on one of their largest distributed projects. Their strategy for inserting Scrum step by step into the Google engineering teams showed great insight and provides helpful lessons learned for all Agile teams.

Bio
Jeff Sutherland is well known as the co-creator of the Scrum agile development process, which influenced the design of the other leading agile process in the U.S., eXtreme Programming (XP). Scrum is a team organization process that brings focus, clarity, and enthusiasm to any project team in any domain.

About the conference
QCon is a conference that is organized by the community, for the community. The result is a high-quality conference experience where a tremendous amount of attention and investment has gone into having the best content on the most important topics, presented by the leaders in our community. QCon is designed with the technical depth and enterprise focus of interest to technical team leads, architects, and project managers.

Happy Scrumming 😀

Software Development Using the AGILE Methodology

The Software Development Life Cycle (SDLC) is a fascinating subject to study because there are many strategies for carrying it out. Information technology project management itself is a unique and very dynamic discipline. After much research, training, and hands-on implementation experience, we concluded that, compared with other software project management methodologies such as Waterfall, Agile offers more flexibility in handling change. At a high level, the Agile flow is as follows:

This flexibility is the main reason we decided to use an Agile methodology. There are actually many methodology choices for implementing agile: Agile Modeling, Agile Unified Process (AUP), Dynamic Systems Development Method (DSDM), Essential Unified Process (EssUP), Extreme Programming (XP), Feature Driven Development (FDD), Open Unified Process (OpenUP), Scrum, and velocity tracking. We wanted to combine agile with a larger, more complete SDLC framework (purely as a reference, with agile remaining the core approach). There were two candidates: Microsoft Solution Framework (MSF) and Rational Unified Process (RUP). After a fairly long evaluation, we finally decided to use RUP with the Agile Scrum concept implemented inside it.

Regards,
Doddi Priyambodo