Japanese Language: An Introduction

As many of you already know, people all over the world have started learning Japanese. Everyone says it is difficult before they start, but once you start learning you will love it. Yes, that was the case with me too. Thanks to work, Japan is the place where I have lived longer than where I was born. I am not very good at the Japanese language; I started learning it in my work environment and from my friends here in Japan. There are different ways to learn it, just like other languages.

 

1. Speak with Japanese people (or people who know Japanese) and learn from them. (A good way)

2. Watch Japanese dramas and movies and learn from them. (Slow and steady)

3. Learn through private classes. (Expensive but worth it)

4. Learn through the public classes offered by volunteers at Japanese ward/city offices. (Cheap and the best way)

There are many reasons to learn Japanese. Here is what I know from my personal experience:

1. Earn more money than you would at home.

2. Live in a place that is more peaceful than almost any other part of the world.

3. Live with people who give priority to humanity over money.

4. Live under a government that is well designed for its people, whether they have money or not.

My personal experience is that learning by doing is the easiest way. Start mingling with Japanese friends (yes, they are the best friends, with no limit by age) and talk with them using what you know. These days many Japanese people are learning English and can communicate in it very well. As you already know, they are well planned, scheduled, and busy, so mingle with them and adjust your schedule accordingly, or make plans with them in advance. My learning was mostly with elderly people, who will guide you in the proper way and teach you the do's and don'ts.

I will try to write up what I can for beginners, from the information I have and what I have collected online. Please spend time on it and learn. See you soon in Japan.

What is Data Virtualization?


Data virtualization is a virtual layer between data sources and data consumers. In simple terms, it is a mediator between various data sources and data consumers. It delivers a simplified, unified, and integrated view of trusted business data in real time or near real time, as needed by the consuming applications, processes, analytics, or business users. Data virtualization integrates data from disparate sources, locations, and formats, without replicating the data, to create a single "virtual" data layer that delivers unified data services to support multiple applications and users. The result is faster access to all data, less replication and cost, and more agility to change.

What does the data virtualization layer do?
1. Unified data governance & security – All data is made discoverable and easily integrable through a single virtual layer, which exposes redundancy and quality issues faster. While those are addressed, data virtualization imposes data model governance and security from the source to the output data services, along with consistency in integration and data quality rules.
2. Agile data services provisioning – Data virtualization promotes the API economy. Any primary, derived, integrated, or virtual data source can be made accessible in a different format or protocol than the original, with controlled access, in a matter of minutes.
3. Logical abstraction and decoupling – Disparate data sources, middleware, and consuming applications that use or expect specific platforms, interfaces, formats, schemas, security protocols, query paradigms, and other idiosyncrasies can now interact easily through data virtualization.
4. Data federation – Data federation is a subset of data virtualization, now enhanced with more intelligent real-time query optimization, caching, and in-memory and hybrid strategies that are chosen automatically (or manually) based on source constraints, application needs, and network awareness. A simple SQL sketch of this unified-view idea follows the list.
5. Semantic integration of structured & unstructured data – Data virtualization is one of the few technologies that bridge the semantic understanding of unstructured and web data with the schema-based understanding of structured data to enable integration and data quality improvements.
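To make the unified-view idea concrete, here is a minimal sketch in plain Oracle SQL: one view presents a local table and a remote table (reached over a database link) as a single dataset. This only illustrates the concept and is not a data virtualization product; the names customers, orders, and crm_link are assumptions for the example.

-- One view acts as the "virtual layer": consumers query it without
-- knowing where the data physically lives (all names are hypothetical).
CREATE OR REPLACE VIEW unified_customer_orders AS
SELECT c.customer_id, c.customer_name, o.order_id, o.order_total
FROM customers c
JOIN orders@crm_link o ON o.customer_id = c.customer_id;

-- Consumers see one integrated dataset:
-- SELECT * FROM unified_customer_orders WHERE order_total > 1000;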

What does the Data Virtualization architecture look like?

Data Virtualization Platforms available
Actifio Copy Data Virtualization
Capsenta's Ultrawrap Platform
Cisco Data Virtualization (formerly Composite Software)
Delphix Data Virtualization Platform
Denodo Platform
DataVirtuality
Data Virtualization Platform
HiperFabric Data Virtualization and Integration
Querona
Stone Bond Technologies Enterprise Enabler Data Virtualization Platform – http://www.stonebond.com
Red Hat JBoss Data Virtualization
Veritas Provisioning File System / Data Virtualization
XAware Data Services

Let us not confuse data virtualization and its functionality with the following.
1. Data visualization is different from data virtualization – Though they look similar, the two are very different. Data visualization is used to view the data that comes from the data virtualization layer.
2. Data virtualization is not a data store – This layer does not replicate or duplicate the source data. Rather, the data virtualization layer stores only metadata about the data sources.

In upcoming discussions I am planning to set up a lab at home and try Red Hat Data Virtualization and the Denodo Platform. I will keep you all updated here.

How to Resolve "Waiting for table metadata lock" on TRUNCATE TABLE


Problem: In MySQL there is a long-running session that is waiting on "TRUNCATE TABLE XXXXXXX".

Cause: You can check the active sessions with show full processlist; or with an explorer tool such as MySQL Workbench. To confirm which transaction is holding up the TRUNCATE TABLE, you can execute the following command:
mysql> show engine innodb status;

This produces a large output, but look for a transaction near the end that has been active for roughly the same amount of time:

Sample output portion of the InnoDB status:

---TRANSACTION 7490, ACTIVE 3047 sec
MySQL thread id 189, OS thread handle 0x7f03be9fb700, query id 3840 10.0.2.1 root cleaning up
Trx read view will not see trx with id >= 7491, sees < 7491

Solution:

The above output clearly shows that MySQL thread id 189 is holding the transaction. Check what thread id 189 is doing and take the necessary action; in the worst case, killing session 189 will resolve your problem:
mysql> KILL 189;
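You can also identify the blocker without scanning the full InnoDB status output. On MySQL 5.5 and later, the information_schema.innodb_trx view lists the open InnoDB transactions; a minimal sketch:

-- List open InnoDB transactions, oldest first; the oldest long-running
-- one is usually the blocker holding the metadata lock
SELECT trx_id,
       trx_mysql_thread_id,  -- matches the processlist id used by KILL
       trx_started,
       TIMESTAMPDIFF(SECOND, trx_started, NOW()) AS age_sec,
       trx_query
FROM information_schema.innodb_trx
ORDER BY trx_started;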

Note: If you are running an old version, this could also be a bug. Refer to the note below:
https://bugs.mysql.com/bug.php?id=61935

How to Install MongoDB on CentOS

1. Create a virtual machine on your laptop with CentOS 6.x.
1.1 Create a mongodb directory under /opt to install MongoDB:

[root@localhost opt]# ls -ltr
total 4
drwxr-xr-x. 2 root root 4096 May 15 22:19 mongodb
[root@localhost opt]#

[root@localhost opt]# cat /etc/redhat-release
CentOS Linux release 6.0 (Final)
[root@localhost opt]#

1.2 Download MongoDB
[root@localhost opt]# curl -O https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-3.4.4.tgz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
 75 82.4M   75 62.3M    0     0  2909k      0  0:00:29  0:00:21  0:00:08 3405k
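Optionally, verify the integrity of the download before extracting it. MongoDB publishes a checksum for each release on its download site; compare it against the output of:

[root@localhost opt]# sha256sum mongodb-linux-x86_64-3.4.4.tgz
(compare the printed value against the published checksum)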

Note: See my post below on how to bridge your laptop's internet connection to the VM created on your laptop.
2. Extract the files from the downloaded archive.
tar -zxvf mongodb-linux-x86_64-3.4.4.tgz
3. Copy the extracted archive to the target directory.
cp -R -n mongodb-linux-x86_64-3.4.4/ /opt/mongodb
4. Confirm the Binaries are extracted

[root@localhost mongodb-linux-x86_64-3.4.4]# ls -ltr
total 120
-rw-r--r--. 1 root root 55625 May 15 22:22 THIRD-PARTY-NOTICES
-rw-r--r--. 1 root root 1359 May 15 22:22 README
-rw-r--r--. 1 root root 16726 May 15 22:22 MPL-2
-rw-r--r--. 1 root root 34520 May 15 22:22 GNU-AGPL-3.0
drwxr-xr-x. 2 root root 4096 May 15 22:22 bin
[root@localhost mongodb-linux-x86_64-3.4.4]# pwd
/opt/mongodb/mongodb-linux-x86_64-3.4.4
[root@localhost mongodb-linux-x86_64-3.4.4]#

5. Ensure the location of the binaries is in the PATH variable.

The MongoDB binaries are in the bin/ directory of the archive. To ensure that the binaries are in your PATH, you can modify your PATH.
export PATH=<mongodb-install-directory>/bin:$PATH
export PATH=/opt/mongodb/mongodb-linux-x86_64-3.4.4/bin:$PATH
[root@localhost ~]# cat .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH
export PATH=/opt/mongodb/mongodb-linux-x86_64-3.4.4/bin:$PATH
[root@localhost ~]#
6. Log out and log in again, or reload the profile, to get the PATH variable updated:

[root@localhost ~]# echo $PATH
/opt/mongodb/mongodb-linux-x86_64-3.4.4/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
[root@localhost ~]#
7. Run MongoDB Community Edition

7.1 Create the data directory.
[root@localhost opt]# mkdir db
[root@localhost opt]# cd db
[root@localhost db]# pwd
/opt/db
7.2 Set permissions for the data directory.
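This lab runs mongod as root, so root ownership is sufficient; a minimal sketch (the dedicated mongod user in the comment is hypothetical):

[root@localhost opt]# chmod 0755 /opt/db
# If you later run mongod as a dedicated user (e.g. mongod), give it ownership instead:
# chown -R mongod:mongod /opt/db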

7.3 Run MongoDB

[root@localhost ~]# env |grep PATH
PATH=/mongodb/mongodb-linux-x86_64-3.4.4/bin:/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
[root@localhost ~]#

mongod --dbpath <path to data directory>
mongod --dbpath /opt/db
[root@localhost db]# mongod --dbpath /opt/db
2017-05-15T22:27:04.770+0200 I CONTROL [initandlisten] MongoDB starting : pid=1267 port=27017 dbpath=/opt/db 64-bit host=localhost.localdomain
2017-05-15T22:27:04.770+0200 I CONTROL [initandlisten] db version v3.4.4
2017-05-15T22:27:04.770+0200 I CONTROL [initandlisten] git version: 888390515874a9debd1b6c5d36559ca86b44babd
2017-05-15T22:27:04.770+0200 I CONTROL [initandlisten] allocator: tcmalloc
2017-05-15T22:27:04.770+0200 I CONTROL [initandlisten] modules: none
2017-05-15T22:27:04.770+0200 I CONTROL [initandlisten] build environment:
2017-05-15T22:27:04.770+0200 I CONTROL [initandlisten] distarch: x86_64
2017-05-15T22:27:04.770+0200 I CONTROL [initandlisten] target_arch: x86_64
2017-05-15T22:27:04.770+0200 I CONTROL [initandlisten] options: { storage: { dbPath: "/opt/db" } }
2017-05-15T22:27:04.844+0200 I STORAGE [initandlisten]
2017-05-15T22:27:04.844+0200 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine

 

Yes, your MongoDB is up and running now!
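From a second terminal you can verify that the server accepts connections, using the mongo shell bundled in the same bin/ directory (a minimal check):

[root@localhost ~]# mongo --port 27017
> db.version()
3.4.4
> exit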

 

How to add a bridge network

Problem statement:
You might have created a VM on your laptop which runs on a different network segment (10.x.x.x), while your laptop runs on a wired/wireless network in another segment (192.168.x.x). Now we need to bridge these two networks to transfer files and access the internet. How do we achieve it?
Solution:
In your VM, add another network interface as a bridged network; the device name will be eth1. Make a note of its HWaddr.
Rebooting the VM will generally bring both the eth0 and eth1 interfaces up by default. You can then access your laptop/PC through the second interface, whose IP segment will be the same as your laptop/PC segment.

If not, you can manually add the device file and update it as below:

[root@localhost network-scripts]# pwd
/etc/sysconfig/network-scripts
[root@localhost network-scripts]#

[root@localhost network-scripts]# cat ifcfg-eth0
DEVICE="eth0"
BOOTPROTO="dhcp"
HWADDR="08:00:27:yy:yy:yy"
NM_CONTROLLED="yes"
ONBOOT="yes"
[root@localhost network-scripts]# cat ifcfg-eth1
DEVICE="eth1"
HWADDR="08:00:27:xx:xx:xx"
ONBOOT="yes"
NM_CONTROLLED="yes"
BOOTPROTO="dhcp"
[root@localhost network-scripts]#
eth0 Link encap:Ethernet HWaddr 08:00:27:yy:yy:yy
inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0

eth1 Link encap:Ethernet HWaddr 08:00:27:xx:xx:xx
inet addr:192.168.100.106 Bcast:192.168.100.255 Mask:255.255.255.0
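After creating or editing ifcfg-eth1, restart the network service (or bring up just the new interface) so the change takes effect; on CentOS 6 this looks like:

[root@localhost network-scripts]# service network restart
# or bring up only the new interface:
[root@localhost network-scripts]# ifup eth1
[root@localhost network-scripts]# ifconfig eth1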

How to Fix Invalid Partition Indexes

Indexes can become invalid for various reasons. When indexes are invalid, your queries may not perform well, and sometimes DML operations will fail. So it is the DBA's responsibility to check periodically and rebuild invalid indexes during off-business hours.

1. How to find the Invalid Indexes
Indexes can be on normal tables or on partitioned/subpartitioned tables; correspondingly we have NONPARTITIONED INDEXES and PARTITIONED/SUBPARTITIONED INDEXES.

We have three different views to check for respective Indexes.
1.1 DBA_INDEXES
1.2 DBA_IND_PARTITIONS
1.3 DBA_IND_SUBPARTITIONS

You can use the STATUS column of these views to find the invalid indexes. A status of VALID, USABLE, or N/A is good.

You can write a simple SQL script using all three views, with proper subtitles, and run it periodically.

------ Snip of Invalid Index SQL - Start ------

set linesize 200
set pagesize 200
col owner format a20
col index_owner format a20
col index_name format a30
col partition_name format a30
col subpartition_name format a30
col status format a10

prompt ----- Invalid NONPARTITIONED Indexes
select owner, index_name, status from dba_indexes
where status not in ('VALID', 'USABLE', 'N/A');
prompt ----- Invalid PARTITIONED Indexes
select index_owner, index_name, partition_name, status
from dba_ind_partitions where status not in ('VALID', 'USABLE', 'N/A');
prompt ----- Invalid SUBPARTITIONED Indexes
select index_owner, index_name, partition_name, subpartition_name, status
from dba_ind_subpartitions where status not in ('VALID', 'USABLE', 'N/A');

------ Snip of Invalid Index SQL - End ------

2. How to fix the Invalid Indexes:

Once invalid indexes are identified, we have to rebuild them.

We can use REBUILD to rebuild the indexes, applying the keywords below appropriately to rebuild them in a safer manner:

2.1 PARALLEL
2.2 NOLOGGING
2.3 ONLINE

Note: Please be careful when rebuilding indexes, as a rebuild can put current operations on hold or abort them, so check activity on the object before you start. I recommend doing it during off-business hours unless you need it as an emergency. If it has to be done during business hours, make sure users are aware that you are doing it. A sketch for checking active sessions on the object follows.
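Before rebuilding, you can check whether any session currently holds a lock on the underlying table. A hedged sketch (replace YOUR_TABLE with the actual table name):

-- Sessions currently holding locks on the table whose index you plan to rebuild
select s.sid, s.serial#, s.username, s.status
from v$session s
join v$locked_object lo on lo.session_id = s.sid
join dba_objects o on o.object_id = lo.object_id
where o.object_name = 'YOUR_TABLE';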

NONPARTITIONED indexes rebuild:

ALTER INDEX <OWNER>.<INDEX_NAME> REBUILD NOLOGGING;

PARTITIONED indexes rebuild:

ALTER INDEX <OWNER>.<INDEX_NAME> REBUILD PARTITION <PARTITION_NAME> PARALLEL NOLOGGING;
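Rather than typing these statements by hand, you can generate the rebuild DDL from the dictionary views. A sketch for partitioned indexes, reusing the status filter from the script above; review the generated statements before running them:

set linesize 200
select 'ALTER INDEX ' || index_owner || '.' || index_name ||
       ' REBUILD PARTITION ' || partition_name || ' PARALLEL NOLOGGING;' as rebuild_ddl
from dba_ind_partitions
where status not in ('VALID', 'USABLE', 'N/A');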

How to Use Advanced Compression in Oracle 12c

1. What is Advanced Compression?

As you already know, compression has started playing a major role in Oracle these days, saving cost and improving performance.

The Oracle Advanced Compression option helps improve performance while reducing storage costs. It can significantly reduce the overall database storage footprint by enabling compression for all types of data: relational (table), unstructured (file), index, network, and backup data.
The Oracle Advanced Compression option is designed to improve performance for all components of your IT infrastructure, including memory, network bandwidth, and storage.

2. Benefits of Advanced Row Compression

1. Reduce storage space consumption by a factor of 2x to 4x by using Advanced Row Compression.
2. Table scans are up to 2.5x faster compared with uncompressed data.
3. A significant advantage is the ability to read compressed blocks (data and indexes) directly, in memory, without uncompressing them. This improves performance due to the reduction in I/O and in the system calls related to the I/O operations. Further, the buffer cache becomes more efficient by storing more data without having to add memory.

3. Minimal Performance Overhead

1. A newly initialized block remains uncompressed until data in the block reaches an internally controlled threshold. When a transaction causes the data in the block to reach this threshold, all contents of the block are compressed. Subsequently, as more data is added to the block and the threshold is again reached, the entire block is recompressed to achieve the highest level of compression. This process repeats until Oracle determines that the block can no longer benefit from further compression. Only the transaction that performs the compression of the block will experience the slight compression overhead – the majority of DML transactions on compressed blocks will have the exact same performance as they would with uncompressed blocks.
2. Less than 3% CPU overhead on compressed data.

4. Advanced Row Compression Implementation:

4.1 For new tables and partitions, enabling Advanced Row Compression is easy: simply CREATE the table or partition and specify ROW STORE COMPRESS ADVANCED.
Example:
CREATE TABLE emp (emp_id NUMBER, first_name VARCHAR2(128), last_name VARCHAR2(128)) ROW STORE COMPRESS ADVANCED;

4.2 For existing tables and partitions, there are a number of recommended approaches to enabling Advanced Row Compression (sketches of the ALTER TABLE variants follow the list):
4.2.1. ALTER TABLE … ROW STORE COMPRESS ADVANCED
Enables Advanced Row Compression for all future DML; however, the existing data in the table will remain uncompressed.

4.2.2. Online Redefinition (DBMS_REDEFINITION)
Enables Advanced Row Compression for future DML and also compresses the existing data. DBMS_REDEFINITION keeps the table online for both read and write activity during the migration. Run DBMS_REDEFINITION in parallel for best performance. There is no interruption in the use of the indexes during or after the redefinition; the only exception is when online redefinition is used for redefining a partition, in which case any global indexes are invalidated and need to be rebuilt.

4.2.3. ALTER TABLE … MOVE ROW STORE COMPRESS ADVANCED
Enables Advanced Row Compression for future DML and also compresses the existing data. While the table is being moved it remains online for read activity, but it holds an exclusive (X) lock, so all DML will be blocked until the move command completes. Run ALTER TABLE … MOVE in parallel for best performance. ALTER TABLE … MOVE will invalidate any indexes on the partition or table; those indexes will need to be rebuilt after the move.

4.2.4. ALTER TABLE … MOVE TABLE/PARTITION/SUBPARTITION … ONLINE
Enables Advanced Row Compression for future DML and also compresses the existing data. ALTER TABLE … MOVE … ONLINE allows DML operations to continue running uninterrupted on the table/partition/subpartition that is being moved. Indexes are maintained during the move operation, so a manual index rebuild is not required. New in 12.2 on Oracle Cloud, tables can be moved online as well as partitions/subpartitions.

5. Advanced Index Compression
OLTP databases tend to have many indexes to support the multitude of access paths of OLTP applications, which can cause indexes to contribute a greater share of the overall storage of a database.

New in 12.2 on Oracle Cloud, the HIGH level of Advanced Index Compression provides significant space savings while also improving performance for queries that are executed using indexes. HIGH compression offers the following advantages over LOW compression:
Gives higher compression ratios in most cases.
Employs more complex compression algorithms than advanced low compression.
Stores data in a compression unit, which is a special on-disk format.

The following is an example of enabling Advanced Index Compression (high level):

CREATE INDEX hr.emp_mndp_ix ON hr.employees(manager_id, department_id) COMPRESS ADVANCED HIGH;

Restrictions on Index Compression:

Bitmap indexes cannot be compressed.
Compressing partitioned indexes comes with restrictions; from 11g onwards, prefix compression can be applied to partitioned indexes.

6. Here I tried to implement index compression in my Oracle lab:

6.1 Create a table with six columns of type VARCHAR2
CREATE TABLE test (
ENAME VARCHAR2(75),
EADD1 VARCHAR2(75),
EADD2 VARCHAR2(75),
EADD3 VARCHAR2(75),
EADD4 VARCHAR2(75),
CITY VARCHAR2(75)
);
Table created.

INSERT /*+ APPEND */ INTO test
SELECT RPAD('X',75, 'X'),
RPAD('X',75, 'X'),
RPAD('X',75, 'X'),
RPAD('X',75, 'X'),
RPAD('X',75, 'X'),
RPAD(TO_CHAR(level),75, 'X')
FROM dual
CONNECT BY level <= 10000;

10000 rows created.

COMMIT;
col owner format a10
col segment_name format a15
col segment_type format a15
select owner,segment_name,segment_type,bytes from dba_segments where owner='MUDHALVAN';

OWNER      SEGMENT_NAME    SEGMENT_TYPE    BYTES
---------- --------------- --------------- ----------
MUDHALVAN  TEST            TABLE              6291456

6.2 Created a non-unique index without compression and checked the size of the index

SQL> CREATE INDEX test_idx ON test(ENAME, EADD1, EADD2, EADD3, EADD4, CITY);

Index created.

SQL> EXEC DBMS_STATS.gather_index_stats(USER, 'test_idx');

PL/SQL procedure successfully completed.

SQL> select owner,segment_name,segment_type,bytes from dba_segments where owner='MUDHALVAN';

OWNER      SEGMENT_NAME    SEGMENT_TYPE    BYTES
---------- --------------- --------------- ----------
MUDHALVAN  TEST_IDX        INDEX              6291456
MUDHALVAN  TEST            TABLE              6291456

SQL>

6.3 Dropped the previous index, created a non-unique index with compression, and checked the size of the index

SQL> CREATE INDEX test_idx ON test(ENAME, EADD1, EADD2, EADD3, EADD4, CITY) COMPRESS 5;

Index created.

SQL> EXEC DBMS_STATS.gather_index_stats(USER, 'test_idx');

PL/SQL procedure successfully completed.

SQL> select owner,segment_name,segment_type,bytes from dba_segments where owner='MUDHALVAN';

OWNER      SEGMENT_NAME    SEGMENT_TYPE    BYTES
---------- --------------- --------------- ----------
MUDHALVAN  TEST_IDX        INDEX              2097152
MUDHALVAN  TEST            TABLE              6291456

SQL>

Note: You can see the size of the index is reduced by a factor of 3 (from 6291456 to 2097152 bytes).

6.4 Dropped the previous index, created a unique index without compression, and checked the size of the index

SQL> CREATE UNIQUE INDEX test_idx ON test(ENAME, EADD1, EADD2, EADD3, EADD4, CITY);

Index created.

SQL> EXEC DBMS_STATS.gather_index_stats(USER, 'test_idx');

PL/SQL procedure successfully completed.

SQL> select owner,segment_name,segment_type,bytes from dba_segments where owner='MUDHALVAN';

OWNER      SEGMENT_NAME    SEGMENT_TYPE    BYTES
---------- --------------- --------------- ----------
MUDHALVAN  TEST_IDX        INDEX              6291456
MUDHALVAN  TEST            TABLE              6291456

SQL>

6.5 Dropped the previous index, created a unique index with compression, and checked the size of the index

SQL> CREATE UNIQUE INDEX test_idx ON test(ENAME, EADD1, EADD2, EADD3, EADD4, CITY) COMPRESS 5;

Index created.

SQL> EXEC DBMS_STATS.gather_index_stats(USER, 'test_idx');

PL/SQL procedure successfully completed.

SQL> select owner,segment_name,segment_type,bytes from dba_segments where owner='MUDHALVAN';

OWNER      SEGMENT_NAME    SEGMENT_TYPE    BYTES
---------- --------------- --------------- ----------
MUDHALVAN  TEST_IDX        INDEX              2097152
MUDHALVAN  TEST            TABLE              6291456

SQL>

6.6 Dropped the previous index, created a unique index with advanced compression, and checked the size of the index

SQL> CREATE UNIQUE INDEX test_idx ON test(ENAME, EADD1, EADD2, EADD3, EADD4, CITY) COMPRESS ADVANCED LOW;

Index created.

SQL> EXEC DBMS_STATS.gather_index_stats(USER, 'test_idx');

PL/SQL procedure successfully completed.

SQL> select owner,segment_name,segment_type,bytes from dba_segments where owner='MUDHALVAN';

OWNER      SEGMENT_NAME    SEGMENT_TYPE    BYTES
---------- --------------- --------------- ----------
MUDHALVAN  TEST_IDX        INDEX              2097152
MUDHALVAN  TEST            TABLE              6291456

SQL>

How to Import an Oracle 12c Dump into an 11g Database

What is the scenario?
The database was exported using 12c expdp. Now we have to import it into an 11g Release 2 database.

What command did you use?
impdp system/system directory=backup dumpfile=test12c.dmp logfile=test12.log schemas=test version=11.2

What error message did you get?
ORA-39142: incompatible version

Solution:
impdp system/system directory=backup dumpfile=test12c.dmp logfile=test12.log schemas=test version=latest

Note: You can also run expdp again with version=11.2 and do a direct impdp on the target database, as sketched below.
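A sketch of that alternative (the dump file name test11g.dmp is hypothetical):

# On the 12c source: export in a format an 11.2 database can read
expdp system/system directory=backup dumpfile=test11g.dmp logfile=exp_test11g.log schemas=test version=11.2

# On the 11g target: a plain import, no version clause needed
impdp system/system directory=backup dumpfile=test11g.dmp logfile=imp_test11g.log schemas=test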

How to Access Oracle VirtualBox VMs from Your Local Network


Why:
A VM created in Oracle VirtualBox cannot be accessed from your local network. This is because virtual machines created in VirtualBox get their own subnet, something like 10.0.2.x.

Your home computer network is on a different network segment, so you will not be able to access the VM directly from your computer using ssh/telnet. To access it, you can add the VirtualBox machine to your network as well.

How:
To add the VirtualBox machine to your local network, add another virtual network adapter in the VM and bridge it to your home computer's network:

1. Open Oracle VirtualBox.
2. Right-click on the VM.
3. Select Settings.
4. Choose Network.
5. Now you can see only Adapter 1 is checked.
6. Click on the Adapter 2 tab and check Enable Network Adapter.
7. Attached to: choose Bridged Adapter from the drop-down.
8. Name: choose your home computer's network adapter.
9. Press OK to save.

Now start the virtual machine and you will see two adapters enabled with IPs assigned: the first adapter on the VirtualBox network segment and the second adapter with an IP from your home computer network, similar to the ifconfig output in the bridge network post above.

If you want to see how it is done, I created a video and posted it on YouTube; subscribe and watch it:

https://www.youtube.com/channel/UCcA0dQDxgTwfM1bNMP3hHBA

Enjoy!!!

How to add posts to pages

What is Add Post in Page?
I was struggling for many days because in WordPress, whatever new post you add goes to the Home page by default. So my juniors and friends who wanted to see category-related posts on dedicated pages found it very difficult; they had to search my home page. You may want to create multiple pages based on the categories of the posts or content you wish to publish. With the default WordPress settings you will not be able to do it.
How can we achieve it?
There are plugins for this provided by WordPress and some nice authors. Go to your WordPress admin dashboard and search for the plugin "post in page", which will bring up multiple plugins. You can choose any one and follow the instructions to install and enable it.
How does it work?
Very simple; you have to do only three steps.
1. Install the plugin you chose.
2. Activate the plugin.
3. Add a simple shortcode to each page with the name of the category you want displayed on that page. (Each category has a slug name, created internally by WordPress, which is always the lowercase form of your category name.)
Sample:
I chose the "Posts for Page" plugin and installed it.
Here is the shortcode I added on my Oracle page, as I have the category Oracle with the slug name oracle:

[posts-for-page cat_slug='oracle' hide_images='false' num='5' read_more='Read More »' show_full_posts='false' use_wp_excerpt='true' strip_html='true' hide_post_content='false' show_meta='true']

Now I am happy and able to see the posts based on categories.
Note: For website security reasons, anonymous comments and users will be deleted. If you are a genuine user, please email me or leave a brief comment so that I can enable your comment and account. I know it is a hassle, but please help me out here. Thanks in advance.
