Getting to Know AWS DynamoDB – A NoSQL Database Platform from Amazon Web Services

Q: What is Amazon DynamoDB?
DynamoDB is a fast and flexible nonrelational database service for any scale. DynamoDB enables customers to offload the administrative burdens of operating and scaling distributed databases to AWS so that they don’t have to worry about hardware provisioning, setup and configuration, throughput capacity planning, replication, software patching, or cluster scaling.
Q: What does DynamoDB manage on my behalf?
DynamoDB takes away one of the main stumbling blocks of scaling databases: the management of database software and the provisioning of the hardware needed to run it. You can deploy a nonrelational database in a matter of minutes. DynamoDB automatically scales throughput capacity to meet workload demands, and partitions and repartitions your data as your table size grows. Also, DynamoDB synchronously replicates data across three facilities in an AWS Region, giving you high availability and data durability.

Overview of DynamoDB – the Scalability, Security, and Availability of the Service

  • It is a non-relational (NoSQL) database that supports both key-value and document data models for your serverless service implementations.
  • Data is automatically stored in 3 (three) copies across different facilities as the service's high availability strategy, and it is persisted on high-speed SSDs as its high performance strategy. It can also be configured to replicate data to another region if you want even higher availability.
  • The database is designed to scale without limits!
  • It has comprehensive security protection: isolated network access, full logging, monitoring and alerting, fine-grained access control, and data encryption with a key management system.


Q: Can DynamoDB be used by applications running on any operating system?
Yes. DynamoDB is a fully managed cloud service that you access via API. Applications running on any operating system (such as Linux, Windows, iOS, Android, Solaris, AIX, and HP-UX) can use DynamoDB. We recommend using the AWS SDKs to get started with DynamoDB.

Q: How am I charged for my use of DynamoDB?
Each DynamoDB table has provisioned read-throughput and write-throughput associated with it. You are billed by the hour for that throughput capacity if you exceed the free tier. Note that you are charged by the hour for the throughput capacity, whether or not you are sending requests to your table. If you would like to change your table’s provisioned throughput capacity, you can do so using the AWS Management Console, the UpdateTable API, or the PutScalingPolicy API for auto scaling. Also, DynamoDB charges for data storage as well as the standard internet data transfer fees.
To learn more about DynamoDB pricing, see the DynamoDB pricing page.
Please note that DynamoDB comes with generous Free Tier allowances; if you run an SME business, you will most probably not exceed them. I will say, with all of these capabilities and this level of reliability... it's crazy!
The Free Tier includes:
25 GB PER MONTH of data storage (indexed)
200 MILLION REQUESTS PER MONTH through 25 write capacity units and 25 read capacity units

You pay for only the resources you provision beyond these free tier limits. The DynamoDB free tier applies to all tables in a region and does not expire at the end of your 12-month AWS Free Tier.
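As a sanity check on the "200 million requests" figure, here is a back-of-the-envelope calculation in Python. It assumes the standard DynamoDB capacity-unit semantics: one write capacity unit sustains one write per second (items up to 1 KB), and one read capacity unit sustains two eventually consistent reads per second (items up to 4 KB).

```python
# Back-of-the-envelope check of the free tier's "200 million requests" figure.
# Assumed semantics: 1 WCU = 1 write/sec (items <= 1 KB);
# 1 RCU = 2 eventually consistent reads/sec (items <= 4 KB).

SECONDS_PER_MONTH = 30 * 24 * 3600          # 2,592,000

def free_tier_requests(wcu: int = 25, rcu: int = 25) -> int:
    """Maximum requests per month sustainable on the given provisioned capacity."""
    writes = wcu * SECONDS_PER_MONTH        # 1 write per WCU-second
    reads = rcu * 2 * SECONDS_PER_MONTH     # 2 eventually consistent reads per RCU-second
    return writes + reads

total = free_tier_requests()
print(f"{total:,} requests/month")          # 194,400,000 -- roughly the advertised 200 million
```

So 25 WCU plus 25 RCU works out to about 194 million requests in a 30-day month, which is where the rounded "200 million" marketing number comes from.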

Q: What is the maximum throughput I can provision for a single DynamoDB table?
DynamoDB is designed to scale without limits. However, if you want to exceed throughput rates of 10,000 write capacity units or 10,000 read capacity units for an individual table, you must first contact Amazon. If you want to provision more than 20,000 write capacity units or 20,000 read capacity units from a single subscriber account, you must first contact us to request a limit increase.
Q: What is the minimum throughput I can provision for a single DynamoDB table?
The smallest provisioned throughput you can request is 1 write capacity unit and 1 read capacity unit for both auto scaling and manual throughput provisioning. Such provisioning falls within the free tier which allows for 25 units of write capacity and 25 units of read capacity. The free tier applies at the account level, not the table level. In other words, if you add up the provisioned capacity of all your tables, and if the total capacity is no more than 25 units of write capacity and 25 units of read capacity, your provisioned capacity would fall into the free tier.

It’s all about SCALABILITY, SECURITY, and AVAILABILITY for your most important service! (which is your database)

Kind Regards,
Doddi Priyambodo

Best Practice Guide for Running JAVA on VMware vSphere

vSphere today is highly reliable for running all kinds of Business Critical Applications, written in programming languages such as Java, .NET, and others. Heavily loaded database systems such as Billing and Analytics are also very well supported on vSphere, whether they use Oracle Database, SQL Server, or something else. Beyond good performance, the biggest benefits are the High Availability mechanisms and the more advanced operational and management tooling for monitoring the health of these applications. For Java specifically, there are several general points in the best practice guide that deserve attention, chief among them memory usage in the Java Virtual Machine.

  • Size the virtual machine's memory to accommodate the configured Java heap, the memory needed by the JVM's own code, and any other memory used by processes in the guest operating system.
  • Set the virtual machine's memory reservation to the amount calculated above, or reserve the entire memory size of the virtual machine (as long as that exceeds the calculation). If memory swapping occurs, JVM heap performance degrades, especially during garbage collection.
  • Determine the optimal number of virtual CPUs for the virtual machine by testing several vCPU configurations under the same load.
  • If the JVM's garbage collector uses multiple threads, make sure the thread count matches the number of virtual CPUs configured on the virtual machine.
  • To simplify monitoring and maintenance, run one JVM process per virtual machine.
  • Always enable the balloon driver; if memory overcommitment occurs, the virtual machine can manage its memory through this mechanism.
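As a rough sketch of the first two bullets, the memory sizing is simple addition; all figures below are illustrative assumptions, not recommendations.

```python
# Illustrative VM memory sizing for a single JVM, per the guidance above.
# All numbers are example assumptions for one hypothetical workload.

def vm_memory_mb(heap_mb: int, jvm_overhead_mb: int, os_mb: int) -> int:
    """Memory reservation = Java heap + JVM native overhead (code cache,
    metaspace, thread stacks) + guest OS working set."""
    return heap_mb + jvm_overhead_mb + os_mb

# Example: 4 GB heap, ~1 GB JVM native overhead, ~1 GB for the guest OS.
reservation = vm_memory_mb(heap_mb=4096, jvm_overhead_mb=1024, os_mb=1024)
print(reservation)  # 6144 MB -> candidate value for the VM memory reservation
```

For the garbage-collector bullet, the HotSpot flag `-XX:ParallelGCThreads=<n>` is the usual knob, with `n` set to the number of configured vCPUs.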

In summary, this best practice guide can be downloaded from this link:

– In a previous post, I reviewed the Best Practices for running Oracle Database on vSphere. >>

Oracle Database Standard Edition 2 Compared to Other Editions

If you read this blog, you will know I also have a section dedicated to Oracle Database. Several years ago I was an Oracle Database Administrator for Oracle 9i, 10gR2, and 11gR2, handling architecture design, deployment, performance tuning, backup, replication, clustering, and PL/SQL programming. Currently, though, I find cloud technology more interesting than on-premise database technology, which is one of the reasons I moved my focus to Cloud Technology (read: VMware). Anyway, the current version of Oracle Database available is 12cR1.
In this post I would like to elaborate on Oracle's new licensing scheme: the introduction of Oracle Standard Edition 2. This is a brief explanation from Oracle's license document:
Oracle Database Standard Edition 2 may only be licensed on servers that have a maximum capacity of 2 sockets. When used with Oracle Real Application Clusters, Oracle Database Standard Edition 2 may only be licensed on a maximum of 2 one-socket servers. In addition, notwithstanding any provision in Your Oracle license agreement to the contrary, each Oracle Database Standard Edition 2 database may use a maximum of 16 CPU threads at any time. When used with Oracle Real Application Clusters, each Oracle Database Standard Edition 2 database may use a maximum of 8 CPU threads per instance at any time. The minimums when licensing by Named User Plus (NUP) metric are 10 NUP licenses per server.
Here are some notes for customers based on the statement above, plus a few others:

  • Oracle Database Standard Edition 2 (SE2) will replace SE and SE1.
  • SE2 will be limited to systems with a maximum of 2 sockets and a total of 16 CPU threads*
    • *Note: threads, not cores!
    • SE2 is hard-coded in Resource Manager to use no more than 16 CPU threads.
  • RAC is still included with SE2 but is restricted to 2 sockets across the cluster. Therefore, each server must be single socket.
  • SE One and SE will no longer be available for purchase from 10 November 2015.
  • If you need to purchase additional DB SE and SE One licenses, you must purchase SE2 instead and install the required 11g version. Note: you must still comply with the license rules for SE2.
  • Oracle is offering a FREE license migration from SE One* and SE to SE2.
    • *SE One customers will have to pay a 20% increase in support as part of the migration.
    • SE customers face no other cost increases for license or support, subject to Named User minimums being met.
  • Named User minimums for SE2 are now 10 per server.
  • The release prior to SE2 was the last one for SE and SE1.
  • SE and SE1 customers will have 6 months of patching support once SE2 is released, with quarterly patches still available in Oct 2015 and Jan 2016.

Now, comparing it to other editions: these are the features available in SE2 compared to Oracle Database Enterprise Edition:
Continue reading Oracle Database Standard Edition 2 Compared to Other Editions

Why do we need to Virtualize our Oracle Database

Customers usually want to extend the benefits they have already achieved through virtualization (financial, business, and operational benefits within their operating environment) to the next level, for example to Business Critical Applications such as Oracle Database, thereby reaping the many advantages of this infrastructure.

Customers typically aim to achieve the following benefits:

  • Effectively utilise datacenter resources; on traditional physical servers, many database servers utilise only about 30% of their resources.
  • Maximise availability of the Oracle environment at lower cost, as virtualization adds another layer of high availability.
  • Rapidly deploy Oracle database servers for development, testing, and production, using virtualization templates and automation.
  • Maximise uptime during planned maintenance, as virtualization can move a database to another machine without any downtime for the workload.
  • Minimise planned and unplanned downtime, as virtualization provides better disaster avoidance and disaster recovery capabilities.
  • Automate testing and failover of Oracle datacenter environments for disaster recovery and business continuity.
  • Achieve IT compliance through better monitoring systems, audit mechanisms, policy enforcement, and asset management.
  • Minimise Oracle datacenter costs for floor space, energy, cooling, hardware, and labour, as many physical servers can be consolidated onto just a few. This gives the customer a better TCO/ROI compared with a physical-server approach.

Kind Regards,
Doddi Priyambodo

What is Hadoop? Why do we need to virtualize it using VMware?

What is Hadoop?
Hadoop is an Apache open source project that provides scalable and distributed computing, with much of its early development done at Yahoo!. It provides a framework that can process large amounts of data by leveraging the parallel and distributed processing of many compute nodes arrayed in a cluster. These clusters can run on a single host or scale out to thousands of machines, depending on the workload.
What are Hadoop Components?
These are the core modules of Hadoop, which provide its distributed computing capabilities.

  • Hadoop Common – The utilities that support the other Hadoop modules.
  • Hadoop Distributed File System – The distributed file system used by most Hadoop distributions. Also known by its initials, HDFS.
  • Hadoop YARN – Used to manage cluster resources and schedule jobs.
  • Hadoop MapReduce – A YARN-based system for processing large amounts of data in parallel.
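To make the MapReduce model concrete, here is a toy single-process word count in Python. It only illustrates the map/shuffle/reduce data flow; real Hadoop MapReduce distributes these same phases across a cluster, and none of the function names below come from the Hadoop API.

```python
# A toy, single-process illustration of the MapReduce model: word count.
# Hadoop runs the same three phases (map, shuffle, reduce) distributed
# across a cluster; this sketch only shows the data flow.
from collections import defaultdict

def map_phase(line):
    # Emit a (key, value) pair per word, exactly like a word-count mapper.
    return [(word, 1) for word in line.split()]

def shuffle(pairs):
    # Group all values by key, as the framework's shuffle step does.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Sum the grouped values per key.
    return {key: sum(values) for key, values in groups.items()}

lines = ["big data on vsphere", "big data at scale"]
pairs = [kv for line in lines for kv in map_phase(line)]
counts = reduce_phase(shuffle(pairs))
print(counts["big"], counts["data"])  # 2 2
```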

In addition to the core modules, there are others that provide specific and specialized capabilities to this distributed processing framework. These are just some of the tools:

  • Ambari – A web-based tool for provisioning, management, and monitoring of Hadoop clusters.
  • HBase – Distributed database that supports structured data storage.
  • Hive – Data warehouse model with data summarization and ad hoc query capability.
  • Pig – Data flow language.
  • ZooKeeper – Coordination service for distributed applications.

These modules are available from the Apache open-source project, but there are also more than thirty companies that provide Hadoop distributions, bundling the open-source code with competing management solutions, processing engines, and many other features. Some of the best-known and most widely used distributions come from Cloudera, MapR, and Hortonworks.
Why do we need to Virtualize Hadoop workloads?
Now that we know what Hadoop is, and since this blog is all about virtualization: is Hadoop suitable for virtualization? Yes. If you would like the following additional benefits for Hadoop, you should consider virtualizing the workload.

  • Better resource utilization:
    Collocating virtual machines containing Hadoop roles with virtual machines containing different workloads on the same set of VMware ESXi™ server hosts can balance the use of the system. This leads to lower operating expenses and lower capital expenses as you can leverage the existing infrastructure and skills in the data center and you do not have to invest in bare-metal servers for your Hadoop deployment.
  • Alternative storage options:
    Originally, Hadoop was developed with local storage in mind, and this type of storage scheme can be used with vSphere as well. The shared storage that is frequently used as a basis for vSphere can also be leveraged for Hadoop workloads. This reinforces leveraging the existing investment in storage technologies for greater efficiencies in the enterprise.
  • Isolation:
    This includes running different versions of Hadoop itself on the same cluster, or running Hadoop alongside other applications, forming an elastic environment or separate Hadoop tenants. Isolation can reduce your overall security risk, ensure you are meeting your SLAs, and support Hadoop as a service back to the lines of business.
  • Availability and fault tolerance:
    The NameNode, the Resource Manager and other Hadoop components, such as Hive Metastore and HCatalog, can be single points of failure in a system. vSphere services such as VMware vSphere High Availability (vSphere HA) and VMware vSphere Fault Tolerance (vSphere FT) can protect these components from server failure and improve availability.
  • Balance the loads:
    Resource management tools such as VMware vSphere vMotion® and VMware vSphere Distributed Resource Scheduler™ (vSphere DRS) can provide availability during planned maintenance and can be used to balance the load across the vSphere cluster.
  • Business critical applications:
    Uptime is just as important in a Hadoop environment; why would the enterprise want to go back to a time when servers and server components were single points of failure? Leverage the existing investment in vSphere to meet SLAs and provide an excellent service back to the business.

VMware also has a component called VMWARE BIG DATA EXTENSIONS for rapidly deploying highly available Hadoop components and easily managing the infrastructure workloads.
vSphere Big Data Extensions enables rapid deployment, management, and scaling of Hadoop in virtual and cloud environments. Its built-in scale-in/scale-out functionality enables on-demand Hadoop instances.
Everything from simple cloning to sophisticated end-user provisioning products such as VMware vRealize Automation™ can speed up the deployment of Hadoop. This enables IT to act as a service provider to the business, offering Hadoop as a service to the different lines of business with faster time to market. This will further enable today's IT to be a value driver rather than being seen as a cost center.
For more detail about VMware Big Data Extensions, please see the datasheet from VMware Inc.
Kind Regards,
Doddi Priyambodo

Oracle Real Application Cluster Pros-Cons Analysis on vSphere

There are several considerations when deciding whether to implement Oracle Real Application Clusters (RAC) in a vSphere environment. Here is a simple pros and cons analysis.

Pros Analysis – Oracle RAC on vSphere

  • Availability perspective: RAC provides zero-downtime availability. However, VMware already has the VMware HA feature; if the customer considers VMware HA good enough (approximately 5 minutes RTO), there is no need to consider RAC for availability alone.
  • Performance perspective: RAC "might" help database performance if needed, but some databases perform better with RAC and some do not (e.g. batch-processing-intensive applications). It depends on the architecture of the application itself and needs to be tested.
  • Recoverability perspective: RAC gives a zero-downtime experience if the failure happens on a host. But if the failure happens on the shared storage connection, recovery must be performed from backup or through a disaster recovery mechanism.

Cons Analysis – Oracle RAC on vSphere

  • Cost perspective: the customer needs to purchase additional Oracle RAC licenses for each core of the servers in the database cluster.
  • Manageability perspective: RAC adds complexity to manage (Oracle Clusterware, ASM disks, and additional RAC processes).
  • Resource perspective: the customer needs at least 2 VMs per database, on different ESXi hosts, for full RAC capability; anti-affinity rules must be configured so the VMs do not start on the same host.

So, basically the decision is in your hands: whether you are willing to "pay the price" for the features that you "need". Ask the question again: do you really need the features?
Kind Regards,
Doddi Priyambodo

Performance Testing for Oracle Database (Oracle DB Stress Test)

Performance testing for an Oracle Database is often needed to benchmark an existing system, or when we want to move to a new one. We do not want performance on the new system to be worse than on the old system, do we?
Below are some commonly used approaches for this kind of performance testing; besides these mechanisms, there are several other ways using other tools.
A stress test is usually performed by the Application Team together with the Oracle Database Administrator, accompanied by the Infrastructure Administrators (servers, network, storage).
Tuning must be done on every layer to ensure the system under test runs well; it cannot be judged from one side only (e.g. applications, middleware, database, operating system, servers, storage, network, firewalls, routers, etc.).

  • Use the SwingBench OLTP/DSS kits or Dell Quest Benchmark Factory – these are common tools in the Oracle community for testing OLTP (Online Transaction Processing) or OLAP (Online Analytical Processing) workloads.
  • Test application workloads using tools such as HP LoadRunner, IBM Rational Performance Tester, or Apache JMeter, with test plans built by the application owner (who must know the application logic). This is the recommended mechanism, but it requires greater effort.
  • Test via a storage benchmark tool – a SAN requires firmware upgrades, host driver updates, re-cabling, and other changes, and these changes can sometimes cause performance issues. It is best to establish an I/O baseline first using one of these tools:
    • Iometer
    • Linux/UNIX dd
    • Oracle ORION
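As an illustration of what such a baseline measures, here is a minimal Python sketch of a sequential-write test, loosely analogous to a `dd` run with `conv=fsync`. The size and block values are arbitrary examples; a real baseline should use Iometer, dd, or ORION against the actual storage path.

```python
# A minimal sequential-write baseline, loosely analogous to
# `dd if=/dev/zero of=testfile bs=1M count=64 conv=fsync`.
# Size and block values are arbitrary example numbers.
import os
import tempfile
import time

def write_baseline_mb_per_s(size_mb: int = 64, block_kb: int = 1024) -> float:
    block = b"\0" * (block_kb * 1024)
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            for _ in range(size_mb * 1024 // block_kb):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())   # include flush-to-disk time, like conv=fsync
        elapsed = time.perf_counter() - start
        return size_mb / elapsed
    finally:
        os.remove(path)

print(f"{write_baseline_mb_per_s():.1f} MB/s sequential write")
```

Note this measures only one access pattern (large sequential writes); an OLTP database also depends heavily on small random I/O and latency, which is why purpose-built tools such as Iometer and ORION exist.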

Kind Regards,
Doddi Priyambodo

Cloning Microsoft SQL Server from a Template

There are several things that we need to do to deploy Microsoft SQL Server database using standardized template.
The full details can be read in these blogs:

So, the options are:

  1. Run several PowerCLI scripts after the deployment (execute via SysPrep or vRealize Orchestrator)
  2. Use VMware Application Service and automate the deployment via vRealize Automation

Kind Regards,
Doddi Priyambodo

Fight the FUD – Oracle Licensing and Support on VMware vSphere

In this post I will copy-paste one of the Best reading for Virtualizing Business Critical Applications from Michael Webster’s blog. This can explain that Oracle Database can be virtualized in VMware! Enjoy!
This is the source:
I keep hearing stories from Customers and Prospects where Oracle appears to be trying to deceive them for the purposes of extorting more license money from them than they are legally required to pay. I also keep hearing stories of Oracle telling them they would not be supported if they virtualized their Oracle systems on VMware vSphere. This has gone on now for far too long and it’s time to fight back and stop the FUD (Fear, Uncertainty, Doubt)!
In my opinion the best way for you to prevent this situation for your company is by knowing the right questions to ask, and by knowing what your obligations are. The aim for this article is to give you the tools to pay only what you legally owe, while making the most efficient and economic use of your licenses, and get the world class support that you are used to, even in a virtualized environment on VMware vSphere. All without sacrificing availability or performance.
I’m going to start this article by quoting Dave Welch, CTO, House of Brick – “I believe in paying every penny I owe. However, beyond that, it is my discretion to who or what I donate and in what amount. I have no patience with individuals or entities that premeditate the creation of OLSA compliance issues.  I similarly have no patience with the knowing spreading of FUD by some professionals in what could be construed as extortion of funds beyond customers’ executed contractual obligations. I will continue to vigorously promote and defend the legal rights of both software vendors and their customers even if that means I induce accelerated hair loss through rapid, frequent hat swapping.” Source Jeff Browning‘s EMC Communities article – Comments by Dave Welch of House of Brick on Oracle on VMware Licensing.
I agree with Dave on this. So I am going to show you how you can pay what you owe, while using what you pay for as efficiently and cost effectively as possible, and show you how you can still enjoy the full support you are entitled to. Without the scaremongering that sometimes accompanies discussions with Oracle Sales Reps.
For those that aren’t familiar with the term FUD, it is an acronym which stands for Fear, Uncertainty and Doubt. Something some companies and professionals seem to go to great lengths to create in the minds of customers.
FUD #1 – Oracle Licensing and Soft Partitioning
Oracle’s Server/Hardware Partitioning document outlines the different types of partitioning and how they impact licensing. Oracle may try and tell you that licensing a VMware environment will be more expensive as they don’t consider VMware Hard Partitioning. This is complete rubbish. This assertion is completely irrelevant unless you were only planning on deploying a single small database on a very small subset of a very large server. In this case you probably wouldn’t be using Enterprise Edition and may not be paying per CPU Core (Named User Plus instead). Why would you deploy such a system when you could easily purchase a server that is the right size for the job and licensed appropriately for the job? There is absolutely no requirement to run Oracle Enterprise Edition just because you are virtualizing your databases.
There is absolutely no increase in licensing costs over and above what you would have to pay for the same physical infrastructure to run your Oracle Database if you were running it in the OS without virtualization. You still have to pay what you owe, for what you use. The truth is that your costs could actually be significantly less when virtualizing on VMware vSphere as you can get more productive work done for the same amount of physical hardware, and therefore the license requirements and your costs will be significantly less. This is because you can run multiple Oracle databases on the same server and effectively share the resources, including memory, provided you take care during your design to ensure any undesirable performance impacts are avoided.  Take this image for example showing consolidating two dissimilar workloads on the same hardware (Source: VMware).
Continue reading Fight the FUD – Oracle Licensing and Support on VMware vSphere

Oracle Database 11gR2 Performance Tuning on AIX 7.1

There are several mechanisms for tuning an Oracle Database, including:
1. Oracle with Default Tuning (only Pre-Req, SGA and PGA, or AMM)
2. (+) Tuning OS
3. (+) Tuning Oracle Parameter (spfile / init.ora)
4. (+) Tuning Oracle Datafiles configuration, etc
5. (+) Tuning SQL Query (includes: index, partitioning, profiling, hints, etc)
6. (+) Tuning Hardware (Storage configuration, Network configuration, etc)
1). Oracle with Default Tuning
This was already covered in a previous post.
2,3). Tuning the AIX OS and Tuning Oracle Parameters
Several sub-areas can be tuned:
1. Memory Tuning
2. CPU Tuning
3. I/O Tuning
4. Network Tuning
Continue reading Oracle Database 11gR2 Performance Tuning on AIX 7.1

Installing PostgreSQL on RedHat (all) and Installing EnterpriseDB on RedHat for Power

In this post, I will follow up on my previous post about PGBENCH for stress testing.
This time I will cover two topics on installation and configuration:
1. PostgreSQL Database on REDHAT (x86, ppc)
2. EnterpriseDB Database on REDHAT (ppc)
Alternatively, you can use the GUI installer that can be downloaded from the enterprisedb website.
Anyway, if you want to install using the command line… here we go…
1. PostgreSQL Database on REDHAT (x86, ppc)
– The installation will use yum packages to make it easier.
On using yum: it is best to create a local repository first (you can search this blog with the keyword "yum" in the search box).
– The packages to install are:
# yum install postgresql postgresql-server postgresql-contrib
Continue reading Installing PostgreSQL on RedHat (all) and Installing EnterpriseDB on RedHat for Power

How to Stress Test a PostgreSQL or EnterpriseDB Database using pgbench

Here are step-by-step instructions for stress testing a PostgreSQL or EnterpriseDB database on RedHat Linux.
1. Start PostgreSQL or EnterpriseDB
2. Download the pgbench package
3. Unzip pgbench
4. Change the owner of the extracted directory to enterprisedb:enterprisedb if you extracted it as the root user
5. Log in as the enterprisedb or postgres user (as appropriate)
6. $ export PATH=$PATH:/opt/PostgresPlus/9.2AS/bin
To avoid being prompted repeatedly for the enterprisedb/postgres user's password, Continue reading How to Stress Test a PostgreSQL or EnterpriseDB Database using pgbench

Oracle RMAN Cheat Sheet

CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/11.1/dbs/snapcf_ibm.f'; # default
RMAN> backup database plus …
Continue reading Oracle RMAN Cheat Sheet

A Quick Guide to Installing Oracle 11gR2 on AIX 7.1

Below is the procedure for installing Oracle 11gR2 on AIX 7.1. Installing Oracle on AIX is simpler than installing it on RedHat or SUSE.
Anyway, here you go (I have not had time to tidy up this write-up, so please read it carefully):
1. Check real memory and processors:
/usr/sbin/lsattr -E -l sys0 -a realmem
prtconf  | grep proc
2. Check swap and tmp:
/usr/sbin/lsps -a
df -gh
If RAM is between 1 GB and 2 GB: swap space should be 1.5 times the size of the RAM
If RAM is between 2 GB and 16 GB: swap space equal to the size of the RAM
If RAM is more than 16 GB: 16 GB of swap
# df -g
To change the swap size:
Determine the current amount of paging space available to the server by issuing the following command. Continue reading A Quick Guide to Installing Oracle 11gR2 on AIX 7.1
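The swap sizing rule quoted above can be expressed as a small function; this is just a sketch of the stated rule, not of any Oracle or AIX tool.

```python
# The swap sizing rule quoted above, as a function.
def recommended_swap_gb(ram_gb: float) -> float:
    if ram_gb <= 2:          # between 1 GB and 2 GB
        return 1.5 * ram_gb
    elif ram_gb <= 16:       # between 2 GB and 16 GB
        return ram_gb
    return 16.0              # more than 16 GB

for ram in (1.5, 8, 64):
    print(ram, "GB RAM ->", recommended_swap_gb(ram), "GB swap")
```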

Understanding ORACLE AWR Report

Understanding AWR Report

Posted by Mich Talebzadeh in Oracle.

In the post Automatic Workload Repository (AWR) Performance Monitoring Tool Basics, I described the basic setup and report generation for AWR. In this post we will try to understand the AWR report itself.
Before going further I must emphasise that this report was generated by running a PL/SQL block immediately after the instance was restarted. The code was used to simulate a typical OLTP workload with frequent inserts/updates/deletes and commits. The sample code:

  • Performs checkpoints immediately before and after PL/SQL block
  • Manually takes AWR snapshots before and after running PL/SQL block

The code is shown below

  -- Reconstructed as a complete anonymous block; the batched FETCH,
  -- COMMIT placement, and exception handler are filled in to make the
  -- fragment runnable.
  DECLARE
    type ObjIdArray is table of tdash.object_id%TYPE index by binary_integer;
    l_ids ObjIdArray;
    CURSOR c IS SELECT object_id FROM tdash;
  BEGIN
    OPEN c;
    LOOP
      FETCH c BULK COLLECT INTO l_ids LIMIT 1000;
      EXIT WHEN l_ids.COUNT = 0;
      FORALL rs in 1 .. l_ids.COUNT
        UPDATE testwrites
           SET PADDING1 = RPAD('y',4000,'y')
         WHERE object_id = l_ids(rs);
      FORALL rs in 1 .. l_ids.COUNT
        DELETE FROM testwrites
         WHERE object_id = l_ids(rs);
      FORALL rs in 1 .. l_ids.COUNT
        INSERT INTO testwrites
        SELECT * FROM tdash t WHERE t.object_id = l_ids(rs);
      COMMIT;
    END LOOP;
    CLOSE c;
  EXCEPTION
    WHEN OTHERS THEN
      DBMS_OUTPUT.PUT_LINE('Transaction failed');
      ROLLBACK;
  END;
  /

The output from the AWR report is shown below.
The snapshot details
Continue reading Understanding ORACLE AWR Report