MongoDB startup init.d script on CentOS/Red Hat

Here we are going to create an init script for MongoDB, so that MongoDB can start at boot time. Follow the steps below to configure the MongoDB startup script.


(1) Create a file /etc/init.d/mongodb with the content below. You may need to change the path of mongod; in the script below the path is "/opt/mongo/bin/mongod", so adjust it according to your MongoDB installation. You may also need to change "mongodb_user", which is the user MongoDB will run as.
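A minimal sketch of such a script (the mongod path, config file, dbpath and "mongod" user below are assumptions; adjust them to your installation):

#!/bin/sh
# chkconfig: 345 85 15
# description: mongod start/stop script

mongod="/opt/mongo/bin/mongod"     # path to your mongod binary
mongodb_user="mongod"              # user to run mongodb as
config="/etc/mongod.conf"          # config must set dbpath and logpath
dbpath="/data/db"                  # must match the dbpath in your config
lockfile="/var/lock/subsys/mongodb"

start() {
    echo -n "Starting mongod: "
    su - "$mongodb_user" -c "$mongod --config $config --fork"
    RETVAL=$?
    [ $RETVAL -eq 0 ] && touch $lockfile
    echo
}

stop() {
    echo -n "Stopping mongod: "
    su - "$mongodb_user" -c "$mongod --shutdown --dbpath $dbpath"
    RETVAL=$?
    [ $RETVAL -eq 0 ] && rm -f $lockfile
    echo
}

case "$1" in
    start)   start ;;
    stop)    stop ;;
    restart) stop; start ;;
    *)       echo "Usage: $0 {start|stop|restart}"; exit 1 ;;
esac
exit $RETVAL
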
(2) Now make it executable using the command below.

chmod a+x /etc/init.d/mongodb

(3) Now you need to make it runnable in runlevels 3, 4, and 5, so use the commands below.

chkconfig --level 345 mongodb on

chkconfig --list mongodb

You should see that the mongodb service is configured to run at runlevels 3, 4, and 5.

(4) Once you are done with all three steps above, you can start and stop the mongodb service.
Note: please make sure to kill any currently running mongodb process first.
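For example:

pkill mongod

(or, for a clean shutdown, /opt/mongo/bin/mongod --shutdown --dbpath <your dbpath>)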

Skype Audio Sound issue in Ubuntu 13.10

I am trying to remove my total dependency on Windows 7 by using open-source Ubuntu 13.10. Ubuntu 13.10 (Saucy Salamander) seems quite stable, except for two issues I faced recently: a Skype audio problem and VMware Player being incompatible with kernel 3.1x.

To solve the Skype audio issue, the command below was helpful.
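The fix that worked for me (widely reported for Skype 4.x with PulseAudio) is to launch Skype with a higher PulseAudio latency. Assuming the stock launcher path, you can patch the desktop entry like this:

sudo sed -i 's|^Exec=skype|Exec=env PULSE_LATENCY_MSEC=60 skype|' /usr/share/applications/skype.desktop

Or simply start it from a terminal with "PULSE_LATENCY_MSEC=60 skype".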

Installing VMware Player on Ubuntu 13.10

I mostly choose VMware Player as my virtualization platform. There is one more virtualization tool available, "VirtualBox". I have used VirtualBox too, but due to its lack of networking options I prefer not to use it.

VMware Player provides good options for networking, like bridged, host-only, and NAT. I prefer NAT, as all guest OSes can communicate with each other, with the host OS, and with the Internet too.

When I started the installation of VMware Player on Ubuntu 13.10, I got some errors. When I checked "dmesg", I found a segmentation fault error in one of the library files.

I searched the Internet, and then I found that the problem was not with any library file but with the kernel.

Ubuntu 13.10 ships with kernel 3.11.0-12-generic, which VMware Player does not support. So, after searching a few forums, I found the steps below to overcome the VMware Player installation issue on Ubuntu 13.10.

First you need to download VMware Player for Ubuntu from the VMware website. I got "VMware-Player-5.0.2-1031769.x86_64.bundle".

Now, before we start the installation, we need to install the prerequisites. Use the commands below.
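On a stock Ubuntu 13.10 this typically means the compiler toolchain and the headers for the running kernel (the package names below are the usual Ubuntu ones):

sudo apt-get update
sudo apt-get install build-essential linux-headers-$(uname -r)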

Then install VMware Player using the command below.
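With the bundle named as above, that is:

chmod +x VMware-Player-5.0.2-1031769.x86_64.bundle
sudo sh VMware-Player-5.0.2-1031769.x86_64.bundle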

You will see the progress of the installation, and it will finish. But when you start VMware Player, it will ask to install some kernel modules; when you click the "Install" button, it will show you "Send Crash Report".

Now follow the steps below to solve the issue.
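The fix circulating on the forums at the time was a small patch to the vmnet module sources shipped with the Player, followed by a module rebuild. A hedged outline (the patch file name here is an example; vmware-modconfig is VMware's own module build tool):

cd /usr/lib/vmware/modules/source
sudo tar -xf vmnet.tar
# apply the community patch for kernel 3.11 (file name is an example)
sudo patch -p1 -d vmnet-only < ~/vmnet-kernel-3.11.patch
# repack the sources and rebuild all VMware kernel modules
sudo tar -cf vmnet.tar vmnet-only
sudo vmware-modconfig --console --install-all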

Once done, you can start VMware Player without any issue.

Hope this helps anyone who wants to run VMware Player on Ubuntu 13.10.

Installing PHP 5.2 on CentOS/Red Hat

When it comes to PHP installation, most of us use the simple yum utility on CentOS or Red Hat Linux. But when you try to install PHP using yum, you will notice that it installs the latest version of the PHP binaries (5.3 or later).

But quite often the development team has asked me to install an older PHP release (PHP 5.2). I had spent a lot of time on the PHP downgrade process, but finally I figured out how to install the older release in a few minutes. So the process below will help you install PHP 5.2 on the latest CentOS release.

So before we start, let's first remove all PHP packages using the command below. You may need to run it 2-3 times to remove all PHP packages.
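The command I used was along these lines (yum accepts the wildcard):

yum remove php php-*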

Once you have run the above command, you can check with "rpm -qa | grep php" whether any PHP package is still installed.

Now we need to get the Webtatic repository and install it. You can use the commands below to download and install it.
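For CentOS 5 the Webtatic repository RPM was published at the URL below; verify the current URL on webtatic.com for your release:

wget http://repo.webtatic.com/yum/centos/5/latest.rpm
rpm -Uvh latest.rpm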


Once you are done with the repository installation, fire the command below to install PHP 5.2 on CentOS.
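A typical invocation, enabling the Webtatic repo and picking the common PHP packages (extend the package list as needed):

yum --enablerepo=webtatic install php php-cli php-common php-gd php-mysql php-xml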


Of course you can edit the above command to install the PHP library packages you need.

Hope this post helps anyone who wants to install PHP 5.2 on CentOS.

Alfresco Fatal Error: libdcpr.so+0x1330b Java Crashed

You may have seen a Java crash error log like the one below. The message does not show which class or process caused the crash. But we finally found the culprit: a document we were uploading into Alfresco that contained "Hindi" characters. So if you come across this kind of error, check whether a recently uploaded document contains a language other than English.

Hope this will help!

#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00007fbcd266430b, pid=8028, tid=140448967481088
#
# JRE version: 6.0_27-b07
# Java VM: Java HotSpot(TM) 64-Bit Server VM (20.2-b06 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C  [libdcpr.so+0x1330b]  signed char+0xbb
#
# An error report file with more information is saved as:
# /opt/alfresco-4.0.1/hs_err_pid8028.log
#
# If you would like to submit a bug report, please visit:
#   http://java.sun.com/webapps/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#

Alfresco Performance tuning by setting JVM memory parameters

For the last couple of days I have been working on Alfresco's JVM tuning. I faced the issues below with the previous configuration.
  • OldGen space was utilized completely
  • Survivor space was also utilized completely
  • Full GC was taking a long time, which caused the application to stop responding for a while
My previous JVM settings were as given below:

export JAVA_OPTS="-Xms2048m -Xmx4096m -Xss1024k -XX:MaxPermSize=512m -XX:NewSize=1024m -XX:+UseConcMarkSweepGC -XX:+UseCodeCacheFlushing -XX:ReservedCodeCacheSize=64m -Djava.awt.headless=true -Dalfresco.home=/opt/alfresco4 -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dsun.security.ssl.allowUnsafeRenegotiation=true -Djava.net.preferIPv4Stack=true"

After changing a few parameters, I arrived at the settings below, which work perfectly fine. I had to increase the memory allocation and tune the other parameters.

export JAVA_OPTS="-Xms4096m -Xmx8192m -XX:MaxPermSize=512m -XX:NewSize=3072m -XX:+UseConcMarkSweepGC -XX:+UseCodeCacheFlushing -XX:ReservedCodeCacheSize=64m -Djava.awt.headless=true -Dalfresco.home=/opt/alfresco4 -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dsun.security.ssl.allowUnsafeRenegotiation=true -Djava.net.preferIPv4Stack=true"

I have taken screenshots from VisualVM.

Heap Memory and CPU Usage:

OldGen and Eden Space usage:


Linux Terminal Shortcuts

Writing long commands every time becomes tedious and time-consuming. Here I am going to list some shortcuts that will help you save time while using the Linux terminal.

Ctrl+left & Ctrl+right

Hitting Ctrl and the left or right arrow keys jumps between arguments in your command. So, if you had a typo in the middle of the command, you could jump to it quickly with Ctrl and a few taps of the left arrow key.

Tab

This shortcut key is used very often. In bash you just type a few characters, and pressing Tab will auto-complete the command or file/directory name.

Ctrl + U or Ctrl + C

These shortcuts clear the command written on the shell: Ctrl + U deletes everything before the cursor, while Ctrl + C aborts the current line.

Ctrl + R

This one is also used very often. When you hit Ctrl + R it opens a reverse history search; as you type, it shows the most recent command matching the characters you typed.

If you want to go through all the matching commands one by one, keep pressing Ctrl + R.

Alt + Backspace

This shortcut deletes the word before the cursor.

!

This shortcut repeats a previous command. Instead of pressing Ctrl + R, you can type ! followed by the first few characters of the command to repeat it from history.

You can see this in the image below. To run "pkill skype" again, I typed "!pk" and hit Enter, which ran the "pkill skype" command.
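For example (a terminal transcript; "pkill skype" is the earlier command being repeated):

$ pkill skype
$ !pk
pkill skype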



Microsoft SQL Server 2008 Database Mirroring

Scenario:

We have the details of the two servers below. Both servers are in a workgroup. We need to set up database mirroring using certificate-based authentication. This will be active-passive mirroring, so at any time only one (active) server will serve the database; the other (passive) DB server will not be usable, as it will be in a synchronizing state.

The active DB server is called the "Principal" server and the passive DB server is called the "Mirror" server.

DB Servers:
Active DB IP: 10.36.100.54
Passive DB IP: 10.36.100.53
DB name: liferayprod

Prerequisites
Keep port 5022 open between the two database servers

Database Backup and Restore Procedure
(1) First take a database backup from the Principal server. Follow the steps below.

(2) Select the "Backup Type" Full and click the OK button.
(3) Now we need to back up the transaction logs.

(4) Now go to the location where you backed up the database. Copy the liferayprod.bak file from that location to the Mirror server.
(5) Now perform the steps below on the Mirror server to restore the database.
(6) Create a database named "liferayprod". Now right-click on that database and click Tasks → Restore → Database.
(7) Now locate the database file. Check the box as shown in the figure.
(8) Now click "Options" and select the two options shown in the figure. Click the "OK" button.
(9) Once the restore completes, you will see the database's status as shown in the figure below.
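For reference, the GUI steps above correspond to T-SQL along these lines (the file paths are placeholders):

-- on the principal:
BACKUP DATABASE liferayprod TO DISK = 'c:\backup\liferayprod.bak';
BACKUP LOG liferayprod TO DISK = 'c:\backup\liferayprod_log.bak';
-- on the mirror, after copying the files over:
RESTORE DATABASE liferayprod FROM DISK = 'c:\backup\liferayprod.bak' WITH NORECOVERY;
RESTORE LOG liferayprod FROM DISK = 'c:\backup\liferayprod_log.bak' WITH NORECOVERY;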

Database Mirroring Queries on Principal and Mirror server

Perform below steps on Principal server:

On the Active server, open a query window and fire the queries below one by one, following the instructions given.


Before running the next query, create a c:\certificate directory on the server. If required, give full permission to "Everyone".

Before running the next query, copy the mirror server's certificate from the mirror server to the principal server's c:\certificate directory.


Now wait about 10 minutes before applying the next query, as the first synchronization takes time.
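The original queries were embedded as images; the sketch below reconstructs the usual certificate-based mirroring sequence for the principal. The names, passwords and certificate subjects are placeholders -- only the IPs, the port and the database name come from this post:

USE master;
-- 1. master key and the principal's certificate
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StrongPassword1!';
CREATE CERTIFICATE principal_cert WITH SUBJECT = 'Principal mirroring certificate';
-- 2. mirroring endpoint on port 5022, authenticated by the certificate
CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022, LISTENER_IP = ALL)
    FOR DATABASE_MIRRORING (
        AUTHENTICATION = CERTIFICATE principal_cert,
        ENCRYPTION = REQUIRED ALGORITHM AES,
        ROLE = ALL);
-- 3. export the certificate into c:\certificate (create the directory first)
BACKUP CERTIFICATE principal_cert TO FILE = 'c:\certificate\principal_cert.cer';
-- 4. after copying the mirror's certificate here, create a login for the mirror
CREATE LOGIN mirror_login WITH PASSWORD = 'StrongPassword2!';
CREATE USER mirror_user FOR LOGIN mirror_login;
CREATE CERTIFICATE mirror_cert AUTHORIZATION mirror_user
    FROM FILE = 'c:\certificate\mirror_cert.cer';
GRANT CONNECT ON ENDPOINT::Mirroring TO [mirror_login];
-- 5. last of all, and only after the mirror server has run its own SET PARTNER:
ALTER DATABASE liferayprod SET PARTNER = 'TCP://10.36.100.53:5022';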


Perform the steps below on the Mirror server:

On the Passive server, open a query window and fire the queries below one by one.
Before running the next query, create a c:\certificate directory on the server. If required, give full permission to "Everyone".



Before running the next query, copy the Principal server's certificate from the principal server to the mirror server's c:\certificate directory.



Now wait about 10 minutes before applying the next query, as the first synchronization takes time.
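The mirror-side queries follow the same pattern with the roles swapped (again a reconstruction; the restored database must still be in the restoring state from the NORECOVERY restore above):

USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StrongPassword3!';
CREATE CERTIFICATE mirror_cert WITH SUBJECT = 'Mirror mirroring certificate';
CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022, LISTENER_IP = ALL)
    FOR DATABASE_MIRRORING (
        AUTHENTICATION = CERTIFICATE mirror_cert,
        ENCRYPTION = REQUIRED ALGORITHM AES,
        ROLE = ALL);
BACKUP CERTIFICATE mirror_cert TO FILE = 'c:\certificate\mirror_cert.cer';
-- after copying the principal's certificate here:
CREATE LOGIN principal_login WITH PASSWORD = 'StrongPassword4!';
CREATE USER principal_user FOR LOGIN principal_login;
CREATE CERTIFICATE principal_cert AUTHORIZATION principal_user
    FROM FILE = 'c:\certificate\principal_cert.cer';
GRANT CONNECT ON ENDPOINT::Mirroring TO [principal_login];
-- the mirror's SET PARTNER points at the principal and must run
-- before the principal's SET PARTNER:
ALTER DATABASE liferayprod SET PARTNER = 'TCP://10.36.100.54:5022';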


Verify Mirroring Status

(1) Once you have run the mirroring queries, DB mirroring should be started. On the mirror server you should see the below-mentioned status.
(2) On the principal server, you should be able to see the below-mentioned status.
(3) We can check the mirroring status as shown in the figure below. Right-click on the database → Tasks → Launch Database Mirroring Monitor.

Manual Failover

If you want to do a manual failover, use the query below, which will turn the Principal database into the Mirror database and the Mirror database into the Principal.
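With the database name from this post, the standard failover query (run on the principal) is:

ALTER DATABASE liferayprod SET PARTNER FAILOVER;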


You will need to do a manual failover when the two databases are not connected to each other, for reasons such as the database server itself being down or a blocked port. In that case you can remove the server's mirroring function, and the database will act as a standalone database server. Later you can configure mirroring again once you are able to fix the connectivity issue. You can use the command below to remove the database from mirroring.
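The standard query to drop mirroring for this database is:

ALTER DATABASE liferayprod SET PARTNER OFF;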


To bring the mirror online manually after dropping mirroring, issue the following command (you can’t do it while it is still mirrored).
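That is:

RESTORE DATABASE liferayprod WITH RECOVERY;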




Note: At reconfiguration time, you may need to delete the "master key" and endpoint using the queries below.

→ On the Mirror server follow the steps below.


→ On the Principal server follow the steps below.
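On both servers the cleanup queries follow the same sketch (object names match the sketches above; run them only once mirroring has been dropped):

USE master;
DROP ENDPOINT Mirroring;
-- drop whichever certificates exist on that server, for example:
DROP CERTIFICATE principal_cert;
DROP CERTIFICATE mirror_cert;
DROP MASTER KEY;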



Installing Apache Ant on Linux / CentOS / Red Hat

Prerequisites:
- Make sure you have installed Java. If not, then click here to install Java.

Steps:
1. First download the Ant binary package from Apache. You may use the link below to download Apache Ant.
http://apache.mirrors.tds.net//ant/binaries/apache-ant-1.9.2-bin.tar.gz

2. Keep the downloaded binary file at some location. For example: /opt/apache-ant-1.9.2-bin.tar.gz.
3. Now go to the /opt directory and extract the archive using the commands below.
cd /opt/
tar -zxvf apache-ant-1.9.2-bin.tar.gz
4. Extracting the archive creates the /opt/apache-ant-1.9.2 folder. Now we need to set the ANT_HOME variable,
so open the /etc/profile file, write the lines below at the end of the file, and save it.

export ANT_HOME=/opt/apache-ant-1.9.2
export PATH=$PATH:$ANT_HOME/bin

Once you are done editing the /etc/profile file, you need to run the command below to load the variables' values.

source /etc/profile

5. Once you have run the above command, you can check the ANT_HOME variable's value with the echo command.
For example:

-bash-3.2$ echo $ANT_HOME
/opt/apache-ant-1.9.2

Done.

RAID 5 Configuration on CentOS using mdadm

Why RAID?
RAID stands for Redundant Array of Inexpensive Disks (or Redundant Array of Independent Disks). In RAID we join two or more hard disks into one logical disk, which gives the benefits mentioned below.

  • Preventing data loss when one or more disks of the array fail.
  • Getting faster data transfers.
  • Getting the ability to change disks while the system keeps running.
  • Joining several disks to get more storage capacity; often several cheap disks are used rather than one more expensive disk.
There are different RAID levels, which we can use based on our requirements. I am not going deep into the levels here, but I am going to show you one of the most popular ones, "RAID 5", through an example on CentOS.

Why RAID 5?
RAID 5 uses a "striping with distributed parity" algorithm, and it needs at least 3 disks. When a write request comes in, it stores data blocks on two disks (disk A and disk B) and writes parity information on the third disk (disk C). On the next write request, it stores data blocks on another two disks (disk B and disk C) and the parity information on disk A. It continues like this, rotating parity and data blocks across all three disks. Because of the parity information, we get less usable storage: with three disks, 1/3 of the space is used for parity (e.g. three 1TB disks give about 2TB of usable space).
If any disk fails, RAID 5 does not stop: it keeps working in "degraded mode", though you will notice slower performance. Once you add a replacement disk to the RAID 5 array, it starts rebuilding, and after some time RAID 5 behaves as usual.

Steps to configure RAID 5 on CentOS
Prerequisites:
Additional 3 disks for RAID 5
mdadm-2.6.9-3.el5 package installed

1. Let's see how many disks are installed, using the "fdisk -l" command.

fdisk -l
You can see here that /dev/sdb, /dev/sdc and /dev/sdd are unpartitioned. We need to create partitions so that they can be used in RAID 5.
2. Using the "fdisk" command we can create partitions. So first we will create a partition on the /dev/sdb disk. Give the command as shown below.
fdisk /dev/sdb
Once you give the "fdisk /dev/sdb" command, press "n", then press "p", and then press "1" for the partition number. You can check the partition using "p".
Print partition using "p"
Now we need to change the partition "ID" from "83" to "fd". Please see the image below and follow the steps.
Change partition type
Now you can see the partition "ID" has been changed from "83" to "fd". Now we need to save the changes, so press "w" as shown in the image below.
Save partition using "w"
Now once again you can see the partition status using the "fdisk -l" command. You will see output as below.
fdisk -l
You can see here that the /dev/sdb partition has been created with its "ID" set to "fd".

3. Follow step 2 for /dev/sdc and /dev/sdd. Once you have created partitions on all 3 disks, you will see the output below.
Show partition information
4. Now we need to use the "mdadm" command to create the RAID 5 array. The command itself is self-explanatory.
mdadm create RAID array
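The command in the screenshot follows the standard form; for the three partitions created above it is:

mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1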
5. Once you create the array using the mdadm command, it doesn't build the RAID instantly; it takes time. You can see the status using the "mdadm --detail /dev/md0" command as shown below.
mdadm show details
You can notice the "spare rebuilding" state here for /dev/sdd1. After some time you will see "active sync" for /dev/sdd1.
mdadm show details
6. Now we need to format the /dev/md0 RAID device using mkfs.ext3 (you can choose the filesystem type you want). Once the format is done, you can mount it via /etc/fstab.

Format partition
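For example (the /data mount point is an assumption):

mkfs.ext3 /dev/md0
mkdir /data
mount /dev/md0 /data

# for a permanent mount, add a line like this to /etc/fstab:
/dev/md0    /data    ext3    defaults    0 0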
I hope this post will be helpful for configuring RAID 5 on your server.

Alfresco Backup and Restore Process

This post describes the backup and restore process for Alfresco. Here I have mentioned the steps to back up Alfresco's content store (alf_data), Lucene index data, and the database.

Alfresco Backup process:
There are two methods by which we can do an Alfresco backup. The first is a hot backup, done while Alfresco is running, when you want to back up without shutting Alfresco down. The second is a cold backup, where you shut Alfresco down first. Most people prefer the hot backup, as no one wants to shut Alfresco down before every backup and turn it back on when finished. A hot backup also assures data consistency, except for the latest Lucene index; but when you restore from a hot backup set, Alfresco will rebuild the Lucene indexes. So follow the steps below for an Alfresco backup.

Cold Backup Process:
  1. Stop Alfresco.
  2. Backup the database Alfresco is configured to use, using your database vendor's backup tools.
  3. Backup the Alfresco “dir.root” and “dir.indexes” directory. You can get location of these two paths from alfresco-global.properties file.
  4. Store both the database and Alfresco “dir.root/dir.indexes” backups together as a single unit. For example, store the backups in the same directory or compressed file.
  5. Start Alfresco.
Hot Backup Process:

It is absolutely critical that hot backups are performed in the following order:
  1. Make sure you have a "backup-lucene-indexes" folder under 'dir.root' (unless a separate path has been specified). backup-lucene-indexes is useful when you want to restore the indexes.
  2. Backup the database Alfresco is configured to use, using your database vendor's backup tools.
  3. As soon as the database backup completes, backup specific subdirectories in the Alfresco dir.root.
Sub-directories would be as listed below.
  • contentstore
  • contentstore.deleted
  • audit.contentstore
  • backup-lucene-indexes
Finally, store both the database and Alfresco dir.root backups together as a single unit. For example, store the backups in the same directory or in a single compressed file.
Do not store the database and dir.root backups independently, as that makes it unnecessarily difficult to reconstruct a valid backup set for restoration.
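As a sketch, a hot-backup run with a MySQL database could look like this (the paths, database name and credentials are placeholders):

mysqldump -u alfresco -p alfresco > /backup/alfresco-db.sql
cd /opt/alfresco/alf_data
tar -czf /backup/alfresco-dirroot.tar.gz contentstore contentstore.deleted audit.contentstore backup-lucene-indexes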


Note: Alfresco generates "backup-lucene-indexes" every day at 3:00 am, so make sure that the database backup time does not overlap with this schedule. The "backup-lucene-indexes" generation should be finished before you start the SQL backup. So if backup-lucene-indexes runs at 3:00 am, you can set the database backup for 4:00 am, considering that your Alfresco system is not being used during this time.

Restore Process

1. Stop Alfresco.
2. Copy the existing dir.root to a temporary location.
3. Restore dir.root and dir.indexes from backup.
4. If you are restoring from a hot backup, rename <dir.indexes>/backup-lucene-indexes to <dir.root>/lucene-indexes.
5. Restore the database from the database backups.
6. Start Alfresco.

You will see in the logs that Alfresco is rebuilding the index, but the index rebuild is incremental, so it won't take long.

Apache Load Balancer Configuration using mod_jk

Why Apache Load Balancer

We can use the Apache load balancer module (mod_jk) to optimize resource use, maximize throughput, minimize response time, and avoid overload, as well as for automatic failover.

How it can be utilized

Let's assume that you have two Tomcat web applications running on two different servers. Now you want to make your application highly available and also want to distribute traffic across both Tomcat application servers. So here we can configure one web server (Apache) with the mod_jk module, which will act as the frontend server, and the two Tomcat application servers will act as backend servers.

Client requests for your application will come to the web server (Apache). Based on the mod_jk configuration, Apache will send requests to both Tomcat applications. Cool!!!!

How to configure mod_jk on the Apache web server

1. First download the mod_jk source from the links below. Please choose the package according to your server architecture (32-bit or x64).
For Linux:
http://apache.petsads.us//tomcat/tomcat-connectors/jk/tomcat-connectors-1.2.37-src.tar.gz
For Windows:
http://apache.petsads.us//tomcat/tomcat-connectors/jk/binaries/windows/
2. If you are using Windows, then you simply need to copy the mod_jk.so file into your Apache modules directory.
If you are using Linux, then you will have to build the mod_jk.so file using the steps below.
2.1 Extract tomcat-connectors-1.2.37-src.tar.gz.
"tar -zxvf tomcat-connectors-1.2.37-src.tar.gz"
2.2 Now go into the native directory of the extracted source and configure it using the command below.
"cd tomcat-connectors-1.2.37-src/native && ./configure --with-apxs=/usr/sbin/apxs"
2.3 Now run make and then make install.
"make && make install"
Note: If you get any error, then please check whether the "httpd-devel" package is installed.
2.4 If the above 3 commands ran successfully, mod_jk.so will have been created in the /etc/httpd/modules/ directory.

3. Now load that module in Apache's httpd.conf file using the lines below. You can put them at the bottom of the httpd.conf file.
#
# Load mod_jk
#
LoadModule jk_module modules/mod_jk.so

4. Now you need to specify the workers.properties file path in httpd.conf, so that Apache can read the configuration of both Tomcat applications.

JkWorkersFile conf/workers.properties

5. You can specify the log file location, log level, and log format too, using the lines below in httpd.conf.

JkLogFile logs/mod_jk.log
JkLogLevel warn
JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"

6. Now create the workers.properties file using content like the sketch below.
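A sketch of the file, based on the properties explained below (the host IPs are placeholders):

worker.list=loadbalancer

worker.jvm1.port=8009
worker.jvm1.host=192.168.1.11
worker.jvm1.type=ajp13
worker.jvm1.lbfactor=1
worker.jvm1.max_packet_size=65536

worker.jvm2.port=8009
worker.jvm2.host=192.168.1.12
worker.jvm2.type=ajp13
worker.jvm2.lbfactor=1
worker.jvm2.max_packet_size=65536

worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=jvm1,jvm2
worker.loadbalancer.sticky_session=1
worker.loadbalancer.method=B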

Here:
- worker.jvm1.port is the AJP port of the Tomcat application server, as configured in its server.xml file,
- worker.jvm1.host is the IP address of the Tomcat application server,
- worker.jvm1.type is the AJP protocol version,
- worker.jvm1.lbfactor assigns a weight to the Tomcat application,
- worker.jvm1.max_packet_size specifies the maximum packet size.
Please note that we have used jvm1 and jvm2 here: for Tomcat server 1 we have used jvm1, and for the other Tomcat server we have used jvm2.

worker.loadbalancer.balance_workers lists the worker names of the Tomcat application servers.
worker.loadbalancer.sticky_session - This enables sticky sessions.
worker.loadbalancer.method - This sets the load-balancing method.

7. Now you need to mount the load balancer in the httpd.conf file. You can use the line below.

JkMount /* loadbalancer

If you want to exclude any path, it can be specified before the JkMount directive as shown below.

JkUnMount /balancer-manager loadbalancer
JkMount /* loadbalancer

8. Once all the above configuration is done, you can restart the Apache web server and test.

For Sticky Session testing:

If your application uses sessions, then you may need to configure sticky sessions at the Apache load balancer. In workers.properties we have already set worker.loadbalancer.sticky_session to 1, but we need configuration at the Tomcat side too. On both application servers, edit the tomcat/conf/server.xml file and change the property as shown below.
Before:
<Engine name="Catalina" defaultHost="localhost" >
After:
<Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1">
In Tomcat app server 1 you can use jvm1, and in the second Tomcat you can use jvm2.

Configure Log4j logger with Tomcat 6

By default Tomcat uses "java.util.logging" for all of its internal logging. If you want to use log4j with Tomcat for logging instead, follow the steps below.
So before we start, let's remove Tomcat's default java.util.logging configuration and add log4j.

You will have to download the log4j jar file from here. Apart from the log4j jar file, you need to get the tomcat-juli.jar and tomcat-juli-adapters.jar files from this location, according to your Tomcat version.

So after downloading the log4j jar, tomcat-juli.jar and tomcat-juli-adapters.jar files, follow the steps below.
1. Copy the log4j jar and tomcat-juli-adapters.jar files into the tomcat/lib folder.
2. Create a log4j.properties file in the tomcat/lib folder with the content below.
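A trimmed version of the sample file from the Tomcat 6 logging documentation (it sends Tomcat's own logging to a daily-rotated catalina log):

log4j.rootLogger=INFO, CATALINA

log4j.appender.CATALINA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.CATALINA.File=${catalina.base}/logs/catalina
log4j.appender.CATALINA.Append=true
log4j.appender.CATALINA.DatePattern='.'yyyy-MM-dd'.log'
log4j.appender.CATALINA.layout=org.apache.log4j.PatternLayout
log4j.appender.CATALINA.layout.ConversionPattern=%d [%t] %-5p %c- %m%n

log4j.logger.org.apache.catalina=INFO, CATALINA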

3. Now copy the downloaded tomcat-juli.jar into the tomcat/bin directory, replacing the existing tomcat-juli.jar file.
4. Then remove the tomcat/conf/logging.properties file.

Upgrade ImageMagick on Red Hat 5.8/ CentOS

Red Hat Enterprise Linux 5.8 provides ImageMagick 6.2.8, but you may need to upgrade it to a higher version. I was looking for an ImageMagick RPM for RHEL 5.8 and found one on the ImageMagick website. Just downloading and installing the RPM will not work, though; it took me a long time to find its dependency packages and install them. So I have shared this post; if you have the same requirement, I hope it solves your purpose.

Operating System: Red Hat Enterprise Linux 5.8
Architecture: x86_64
ImageMagick: 6.2.8 (new version will be 6.8.5-8)


  • To check the current version of ImageMagick, use the command below.
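For example:

convert -version

(or check the installed package with "rpm -qa | grep ImageMagick")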


  • Download the ImageMagick RPM using the link below.
http://www.imagemagick.org/download/linux/CentOS/x86_64/ImageMagick-6.8.5-10.x86_64.rpm

Now, when you try to install it on RHEL 5.8, you will get an error message about missing dependency packages.

It was not easy to find the dependency packages for ImageMagick; I went through different commands, Googling, etc. Finally I collected all those dependency RPMs. You can download them using the links given below.


ImageMagick-6.8.5-8.x86_64.rpm
OpenEXR-1.4.0a-5.el5.x86_64.rpm
fftw3-3.2.2-3.el5.x86_64.rpm
fltk-1.1.9-4.el5.i386.rpm
fltk-1.1.9-4.el5.x86_64.rpm
jasper-libs-1.900.1-14.el5.x86_64.rpm
libwebp4-0.3.0-31.2.x86_64.rpm
xz-libs-4.999.9-0.3.beta.20091007git.el5.x86_64.rpm
libtool-ltdl-1.5.22-7.el5_4.x86_64.rpm
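Once all the RPMs are in one directory, installing them in a single transaction lets rpm resolve the dependencies among them:

cd /path/to/downloaded/rpms
rpm -Uvh *.rpm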

Alfresco start error : internal error: ObjID already in use

You may come across the error below. It happens when the hostname does not resolve to any IP address.




Solution: Add a hosts file entry so that your hostname resolves to your server's IP address.
filename: /etc/hosts
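For example, if your hostname is "alfresco01" and the server's IP is 192.168.1.50 (both values here are placeholders), the entry would look like this:

192.168.1.50    alfresco01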


Alfresco Certificate Error javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException

This post applies to Alfresco 4. When we install Alfresco and start it, it shows an error in catalina.out as shown below.

javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path validation failed: java.security.cert.CertPathValidatorException: timestamp check failed
        at com.sun.net.ssl.internal.ssl.Alerts.getSSLException(Alerts.java:174)
        at com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1699)
        at com.sun.net.ssl.internal.ssl.Handshaker.fatalSE(Handshaker.java:241)
        at com.sun.net.ssl.internal.ssl.Handshaker.fatalSE(Handshaker.java:235)
        at com.sun.net.ssl.internal.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1206)
...
Caused by: sun.security.validator.ValidatorException: PKIX path validation failed: java.security.cert.CertPathValidatorException: timestamp check failed
        at sun.security.validator.PKIXValidator.doValidate(PKIXValidator.java:289)
        at sun.security.validator.PKIXValidator.doValidate(PKIXValidator.java:263)

Alfresco ships with pre-created certificates, but by the time you install it, they have already passed their expiration date.

So the steps below will help you regenerate the SSL certificate for Alfresco.

1. First download the script to generate the SSL certificate from here.
2. Keep this file in the alfresco-4.0.1/alf_data/keystore folder.
3. Make it executable using below command.
chmod a+x generate_keystores.sh
4. You may need to change "ALFRESCO_HOME" and other parameters in that script, based on your Alfresco directory structure.
5. Now run that script file using the command below.
sh generate_keystores.sh

Once you run the script, it will restart Alfresco and then ask you to provide details to generate the new SSL certificate. Once you are done with certificate creation, you can start Alfresco with a valid certificate.

Liferay 6.1 Cluster Configuration using Multicast

Prerequisites
- liferay-portal-tomcat-6.1.20-ee-ga2-20120731110418084.zip
- ehcache-cluster-web-6.1.20.1-ee-ga2-20120731110418084.war
- liferay-portal-src-6.1.20-ee-ga2-20120731110418084.zip
- 2 Linux nodes with JDK installed
- Liferay Cluster License
- Firewall should be off in both nodes


Follow below steps for Liferay 6.1 EE GA2 Cluster Configuration

1. Extract liferay-portal-tomcat-6.1.20-ee-ga2-20120731110418084.zip into /opt/ on both nodes. Rename the extracted directory to liferay (ex: /opt/liferay).
2. Create a portal-ext.properties file in /opt/liferay on both nodes and add the cluster-related properties below. Both Liferay nodes should point to a single database.


net.sf.ehcache.configurationResourceName=/myehcache/hibernate-clustered.xml
ehcache.multi.vm.config.location=/myehcache/liferay-multi-vm-clustered.xml

cluster.link.enabled=true
lucene.replicate.write=true
ehcache.cluster.link.replication.enabled=true
web.server.display.node=true
org.quartz.jobStore.isClustered=true

3. Add the jvmRoute parameter in both nodes' /opt/liferay/tomcat-7.0.27/conf/server.xml file.
Before:
<Engine name="Catalina" defaultHost="localhost" >
After:
<Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1">

Note: here on node1 we are using jvm1 as the jvmRoute, but on node2 you should use jvm2 as the jvmRoute.

4. Now add the cluster snippet below in the tomcat/conf/server.xml file.
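The minimal multicast setup is Tomcat's default SimpleTcpCluster element, placed inside the <Engine> element:

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>

This one-liner enables session clustering with Tomcat's default multicast membership settings (address 228.0.0.4, port 45564).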

5. Now add <distributable/> in /opt/liferay/tomcat-7.0.27/webapps/ROOT/WEB-INF/web.xml, before </web-app>.

6. Now extract the two files below from liferay-portal-src-6.1.20-ee-ga2-20120731110418084/portal-impl/src/ehcache, and add these files to the /opt/liferay/tomcat-7.0.27/webapps/ROOT/WEB-INF/classes/myehcache folder.

liferay-multi-vm-clustered.xml
hibernate-clustered.xml

7. Now start both Liferay nodes.

Configuration at the web server:

Here I assume that you have already configured the mod_jk module with Apache, so I have shown only the load balancer configuration of the workers.properties file (the sketch in the mod_jk post above applies here unchanged).
You can mount it using the line below.
JkMount /* loadbalancer

Messed-up indentation when pasting clipboard content into Putty

Finally I found a good solution. It was really frustrating: when you copy XML file content from Notepad or Notepad++ and paste it into Putty, you will see that every line you pasted has messed-up indentation. Until I found the solution, I used to indent it manually. In the screenshot below, the lines are not indented when I paste the content of an XML file into a remote terminal in Putty.


Now follow the steps below to get it indented properly.

Step 1: Paste the content in vi wherever you want. You will see the content is not indented.


Step 2: Now press Esc to leave vi's "Insert" mode. Then press "gg" to go to the beginning of the file.
Step 3: Now press "=".
Step 4: Now press Shift+G. (Steps 2-4 together are vi's "gg=G" auto-indent command.)

Done.

It should indent all the lines, as shown in the screen below.



Log rotation / retention in Alfresco

Alfresco creates daily log files: alfresco.log.{date} in the ALFRESCO_HOME folder, and catalina.{date}.log in ALFRESCO_HOME/tomcat/logs.

These files are created every day, but by default there is no configuration to remove old log files that are no longer needed. So I have jotted down the steps to configure Alfresco log rotation and retention.

By default Alfresco uses the log4j library for logging. Log4j is configured with the "org.apache.log4j.DailyRollingFileAppender" class, which creates log files daily by appending the date to the end of the log file name in the ALFRESCO_HOME folder. But this class does not provide a facility to remove old files.

So we can use the "org.apache.log4j.RollingFileAppender" class from log4j instead, which provides a facility for rotating log files and removing old ones. There is one con: it rotates log files based on a size we provide, not on a daily basis. We can give the value of "MaxFileSize" in the log4j.properties file, so whenever the log crosses the size limit, it renames the Alfresco log file to alfresco.log.1, then alfresco.log.2, and so on.
You can also set a limit on how many such log files you want to retain, through the "MaxBackupIndex" parameter. Let's say we set "MaxBackupIndex" to 10: Alfresco will then create files from alfresco.log.1 to alfresco.log.10. When alfresco.log.10 exists and alfresco.log again exceeds "MaxFileSize", alfresco.log is renamed to alfresco.log.1, alfresco.log.1 is renamed to alfresco.log.2, and so on; the oldest file, alfresco.log.10, is deleted, as we have set "MaxBackupIndex" to 10.

So before we start, let's remove Tomcat's default java.util.logging configuration and add log4j.

You will have to download the log4j jar file from here; alternatively, you can take it from the tomcat/webapps/alfresco/WEB-INF/lib/log4j-1.2.15.jar location. Apart from the log4j jar file, you need to get the tomcat-juli.jar and tomcat-juli-adapters.jar files from this location.

So after downloading the log4j jar, tomcat-juli.jar and tomcat-juli-adapters.jar files, follow the steps below.
1. Copy the log4j jar and tomcat-juli-adapters.jar files into the tomcat/lib folder.
2. Create a log4j.properties file in the tomcat/lib folder with the same content as shown in the Tomcat 6 log4j post above.

3. Now copy the downloaded tomcat-juli.jar into the tomcat/bin directory, replacing the existing tomcat-juli.jar file.

4. Then remove tomcat/conf/logging.properties file.

Now let's make the changes on the Alfresco side. There are two locations where Alfresco has a log4j.properties file configured, one for Alfresco and one for Share.

tomcat/webapps/alfresco/WEB-INF/classes/log4j.properties
tomcat/webapps/share/WEB-INF/classes/log4j.properties

You can add the snippet below to the log4j.properties files at both locations to apply log rotation and retention. Here we have used the "File" appender for the alfresco.log file and the "CONSOLE" appender for the catalina.out file.

Note: you can add the two appenders below to the log4j.properties file, but you will also have to remove the original lines they replace (in the stock files these are the DailyRollingFileAppender settings; they appeared struck through in the original post).


# Set root logger level to error
log4j.rootLogger=error, Console, File, CONSOLE

###### Console appender definition #######

# All outputs currently set to be a ConsoleAppender.
log4j.appender.Console=org.apache.log4j.ConsoleAppender
log4j.appender.Console.layout=org.apache.log4j.PatternLayout
log4j.appender.Console.layout.ConversionPattern=%d{ISO8601} [%x] [%p] [%c{3}] [%t] [%r] %m%n

###### File appender definition #######
log4j.appender.File=org.apache.log4j.RollingFileAppender
log4j.appender.File.File=alfresco.log
log4j.appender.File.Append=true
log4j.appender.File.MaxFileSize=10MB
log4j.appender.File.MaxBackupIndex=10
log4j.appender.File.layout=org.apache.log4j.PatternLayout
log4j.appender.File.layout.ConversionPattern=%d{ABSOLUTE} %-5p [%c] %m%n

log4j.appender.CONSOLE=org.apache.log4j.RollingFileAppender
log4j.appender.CONSOLE.File=${catalina.base}/logs/catalina.out
log4j.appender.CONSOLE.MaxFileSize=10MB
log4j.appender.CONSOLE.MaxBackupIndex=10
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{DATE} %x %-5p [%c{3}] %m%n




Once the above changes are done, you can restart Alfresco and check the logs.

Hope this post helps to rotate log files in Alfresco.





org.apache.solr.common.SolrException: Internal Server Error

I have integrated Solr with Liferay. My colleague informed me that he was getting the error below frequently, so I had been looking for a solution for the last few days but could not find one. I tried tuning Solr configuration parameters and ran load tests, but as usual could not reproduce the error. At last, after 2 weeks, I found the clue.

Solution:
Of course, I had looked at the log file on the Solr side too when the above error occurred in Liferay, but I did not notice the time zone difference between the two applications. Anyway, I checked again for the error on the Solr side and found the error below.

Finally I got a clue. It may have happened that Solr was busy running one commit operation when another commit operation was fired by Liferay. By default the "writeLockTimeout" value is 1 second, so the already-running transaction took more than 1 second and Liferay raised the "Internal Server Error".
So I have now set the "writeLockTimeout" value to 10 seconds, in the hope that no commit transaction takes more than 10 seconds.
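In solrconfig.xml that is the writeLockTimeout setting (the value is in milliseconds), e.g.:

<writeLockTimeout>10000</writeLockTimeout>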


Apache Solr Logging through Log4j



By default Solr uses slf4j for logging. Follow the steps mentioned below to configure Solr with log4j.

  1. First delete the slf4j-jdk14-1.5.5.jar file from the $CATALINA_HOME/webapps/solr/WEB-INF/lib/ directory.
  2. Now download slf4j source from http://www.slf4j.org/dist/slf4j-1.5.5.tar.gz and extract it.
  3. Now copy slf4j-log4j12-1.5.5.jar from extracted location to $CATALINA_HOME/webapps/solr/WEB-INF/lib folder.
  4. Now download http://mirrors.dcarsat.com.ar/apache/logging/log4j/1.2.17/log4j-1.2.17.tar.gz and extract it.
  5. Now copy log4j-1.2.17.jar from extracted location to $CATALINA_HOME/webapps/solr/WEB-INF/lib folder.
  6. Now create $CATALINA_HOME/webapps/solr/WEB-INF/classes directory and add file “log4j.properties” with below content.

log4j.rootLogger=ERROR, CONSOLE

log4j.logger.org.apache.solr=INFO

log4j.appender.CONSOLE=org.apache.log4j.RollingFileAppender
log4j.appender.CONSOLE.File=${catalina.base}/logs/catalina.out
log4j.appender.CONSOLE.MaxFileSize=200MB
log4j.appender.CONSOLE.MaxBackupIndex=10
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{DATE} %x %-5p [%c{3}] %m%n

  7. Now rename the $CATALINA_HOME/conf/logging.properties file to $CATALINA_HOME/conf/logging.properties.bak.
  8. Now restart Tomcat.


The above configuration will keep writing logs to the catalina.out file until it reaches 200MB. Once it crosses 200MB, the file will be renamed to catalina.out.1 and new logs will be written to a fresh catalina.out file. The MaxBackupIndex parameter retains that many catalina.out backup files; once the limit is crossed, the oldest file (ex: catalina.out.10) is removed.

Apache's Confusing Access Control

It happens so many times: you write access control in Apache using "Order Allow,Deny" or "Order Deny,Allow".

But how will that be evaluated? I used to check it only after applying the ACL rules. But now the table below is helpful for evaluating the rules.

Example:
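(The original example was an image; a configuration matching the description would look like this:)

Order Allow,Deny
Allow from 192.168.10.0/24
Deny from 192.168.10.100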
Looking at the example above, we can see that 192.168.10.100 will be denied, as per the table below.
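(Table reconstructed from the Apache 2.2 access control documentation.)

Match               | Order Allow,Deny           | Order Deny,Allow
--------------------+----------------------------+---------------------------
Matches Allow only  | Request allowed            | Request allowed
Matches Deny only   | Request denied             | Request denied
Matches neither     | Request denied (default)   | Request allowed (default)
Matches both        | Request denied             | Request allowed
                    | (Deny is evaluated last)   | (Allow is evaluated last)

In the example above, 192.168.10.100 matches both the Allow and the Deny directive, so with "Order Allow,Deny" the final result is denied.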

Alfresco Cluster Validation


This post will be helpful if you have set up an Alfresco cluster and run into issues (L2 cache replication, startup errors, failed cluster message transmission, etc.). The points below should be taken care of before you start testing the Alfresco cluster.
(1) Make sure the hostnames of both Alfresco servers resolve to reachable IP addresses.
Ex: you can check with the "hostname" command, which gives you the hostname. After that, try to ping the hostname. If you get output like "Reply from...", the hostname is reachable. If you get any other message, contact your system admin.
(2) Validate that the max open file limit is configured. By default it is 1024. You can check the max open file limit using the commands below.
First log in as the user you are going to start Alfresco with. Then run the commands below.
ulimit -Hn
ulimit -Sn
If the output of the above commands is "1024" or less, then modify the /etc/security/limits.conf file and add the lines below. Here we are considering that Alfresco runs as the "alfresco" user.
alfresco soft nofile 1024
alfresco hard nofile 10240
After editing the file, restart the server (or log out and back in). Then check again with the ulimit commands.
(3) Check whether the firewall is on or off. If the firewall is on in the server, you may need to open ports; which ports depends on the setup of the Alfresco cluster.
(4) Validate the Java version is Sun JDK.
(5) Validate that the directory in which Alfresco is installed does not contain spaces.
(6) Validate that the directory in which the JVM is installed does not contain spaces.
(7) Validate that the directory Alfresco will use for the repository (typically called alf_data) is both readable and writeable by the operating system user that the Alfresco process will run as.
(8) Validate that you can connect to the database as the Alfresco database user, from the Alfresco server. And you have provided proper library file to connect.
(9) Validate that the character encoding for the Alfresco database is UTF-8. (MySQL only) Validate that the storage engine for the Alfresco database is InnoDB.
(10) If you are using OpenOffice, then make sure that OpenOffice is able to run in headless mode.

Download Oracle JDK using Wget

Downloading the Oracle JDK becomes a pain when you want to download it at a shell prompt using the wget command.
Since Oracle does not allow downloading the JDK without accepting the license agreement, the wget command alone will not help you without some extra switches.

Finally I found out how we can download the Oracle JDK on a remote server in a shell using the wget command. The command below can be used for the download.
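The trick is to send Oracle's license-acceptance cookie along with the request. For example, for JDK 7u45 (adjust the version path for the release you need; this is the form that worked at the time of writing):

wget --no-cookies --no-check-certificate --header "Cookie: oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/7u45-b18/jdk-7u45-linux-x64.tar.gz"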

/bin/sh^M: bad interpreter: No such file or directory



The error "/bin/sh^M: bad interpreter: No such file or directory" occurs when you simply copy/paste a script from your Windows machine to a Linux shell prompt. Windows line endings append a ^M (carriage return) character at the end of each line, so executing the shell script gives the error mentioned below.

[root@localhost ~]# /usr/local/nagios/libexec/check_dirsize_perf.sh -d /tmp/ -w 100 -c 200
-bash: /usr/local/nagios/libexec/check_dirsize_perf.sh: /bin/sh^M: bad interpreter: No such file or directory

Solution:

You can use the dos2unix command to make the file usable (i.e. remove the ^M characters). Use the commands below.

[root@localhost ~]# dos2unix -n /usr/local/nagios/libexec/check_dirsize_perf.sh /usr/local/nagios/libexec/check_dirsize_perf_1.sh
dos2unix: converting file /usr/local/nagios/libexec/check_dirsize_perf.sh to file /usr/local/nagios/libexec/check_dirsize_perf_1.sh in UNIX format ...

[root@localhost ~]# mv /usr/local/nagios/libexec/check_dirsize_perf_1.sh /usr/local/nagios/libexec/check_dirsize_perf.sh
mv: overwrite `/usr/local/nagios/libexec/check_dirsize_perf.sh'? y
[root@localhost ~]# /usr/local/nagios/libexec/check_dirsize_perf.sh -d /tmp -w 100 -c 200 -u mb
293  mb -  critical

Hope this will help.





Enable Caching in Tomcat

Caching helps when you do not want particular files to be downloaded each time. Sometimes you may need to add an extra header so that cached content expires by itself after some time.

So here is a post on how you can implement a caching mechanism on Tomcat.

1. First you will have to download the "Cache Filter" jar file from the location below.
http://code.google.com/p/cache-filter/downloads/list

2. Once you are done with the download, put that jar file in the tomcat/webapps/ROOT/WEB-INF/lib location.

3. Now open the tomcat/webapps/ROOT/WEB-INF/web.xml file and add the filter and filter-mapping entries as sketched below.
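A sketch of those entries (the filter class and init-param names follow the cache-filter project's documentation as I recall it, so verify them against the jar version you downloaded; the one-week expiration and the URL patterns are just examples):

<filter>
    <filter-name>CacheFilter</filter-name>
    <filter-class>com.samaxes.filter.CacheFilter</filter-class>
    <init-param>
        <!-- cache lifetime in seconds (one week here) -->
        <param-name>expiration</param-name>
        <param-value>604800</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>CacheFilter</filter-name>
    <url-pattern>*.css</url-pattern>
    <url-pattern>*.js</url-pattern>
    <url-pattern>*.png</url-pattern>
</filter-mapping>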
4. Once you are done with the changes, you can restart Tomcat and check the Expires header through Firebug.

Hope this helps!!!