How to Convert an Existing Non-Redundant Setup into a Redundant Setup?
To achieve this, follow these broad steps:
- Convert the existing Non-Redundant Manager into a Redundant Manager
- Add one more redundant manager to the setup.
These steps are explained in detail in the following sections.
How to Convert an Existing Non-Redundant Manager into a Redundant Manager?
In order to achieve this, do the following:
- Stop the existing eG manager.
- Execute the setup_cluster script on the manager system. On Windows, this script is in the <EG_INSTALL_DIR>\lib directory; on Unix, it is in the /opt/egurkha/bin directory.
- Using the script, set the existing manager as the primary or the secondary manager of the redundant setup.
- Install the eG license that supports a redundant manager setup.
- Start the manager.
- If you have set the existing manager as the primary manager, then connect to the primary manager and log in to its administrative interface.
- Open the manage/unmanage page using the menu sequence: Infrastructure -> Components -> Manage/Unmanage/Delete.
- Manage any component using this page and click the Update button to save the change. This needs to be done in order to inform the clustered environment of the currently monitored components.
- Avoid performing configuration changes until the additional managers are configured and added to the redundant setup.
- If you have set the existing manager as the secondary manager, then do not start this manager until a primary manager is added to the redundant setup and is started.
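The order of the steps above matters, particularly the rule that a secondary manager must not be started before a primary exists. The outline below is purely illustrative: stop_manager, setup_cluster_as, install_license, and start_manager are placeholder functions standing in for the real eG commands and the setup_cluster script, not actual eG tooling.

```shell
#!/bin/sh
# Illustrative outline of the conversion order. Every function below is a
# placeholder for the corresponding real step, not an actual eG command.
stop_manager()     { echo "manager stopped"; }
setup_cluster_as() { echo "role set to $1"; }   # stands in for setup_cluster
install_license()  { echo "redundant-manager license installed"; }
start_manager()    { echo "manager started"; }

stop_manager
setup_cluster_as primary        # or: setup_cluster_as secondary
install_license
# Start immediately only if this manager is the primary; a secondary must
# wait until a primary has been added to the cluster and started.
start_manager
```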
How to Add Another Manager to the Redundant Setup?
To achieve this, do the following:
- Install the new manager and configure it to use the same database server as the old manager or a separate database server.
- Then, execute setup_cluster (on Windows it will be in the <EG_INSTALL_DIR>\lib directory; on Unix, it will be in the /opt/egurkha/bin directory) on the new manager system.
- Using the script, set this manager as the primary or the secondary manager of the redundant setup.
- If you want even the historical data in the old manager’s database to be replicated to the database of the new manager, then follow the steps given below:
- Take a backup of the old manager’s database.
- Restore it to the new manager’s database server using the database name assigned to the new manager’s database. For example, if you have configured the new manager to use eg_database as its database, then restore the old manager’s database to the new manager’s database server as eg_database.
- On the other hand, if you want the old manager to share with the new manager only that data it receives after the redundant cluster is fully configured and started (i.e., only the future data and not the past data), then the above-mentioned backup-restore procedure can be dispensed with.
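The key point in the backup/restore step is that the dump of the old database is restored under the new manager's database name. The sketch below illustrates only that flow: backup_db and restore_db are placeholders for your database server's own dump and restore tools (the actual commands depend on which database server the eG manager uses), and the file operations merely simulate them.

```shell
#!/bin/sh
# Sketch of the backup/restore flow. backup_db and restore_db are
# placeholders for the database server's real dump/restore tools.
set -e
work=$(mktemp -d)

backup_db()  { echo "dump of $1" > "$2"; }   # placeholder dump tool
restore_db() { cp "$1" "$work/$2.db"; }      # placeholder restore tool

# 1. Back up the old manager's database.
backup_db old_manager_db "$work/old_manager.dump"
# 2. Restore it under the NEW manager's database name (here, eg_database).
restore_db "$work/old_manager.dump" eg_database

ls "$work"
```

In a real setup the two database servers are on different hosts, so the dump file would be transferred (for example, via scp or a file share) before being restored.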
- Next, to ensure that the new manager is updated with the details of IC tests configured on the old manager, follow the steps given below:
- First, from the ini files listed below, search for those entries that are relevant to the IC tests configured on the old manager, and copy them to the same ini files on the new manager. To search, use the names of the IC tests.
- eg_agents.ini
- eg_db.ini
- eg_dbase.ini
- eg_specs.ini
- eg_tables.ini
- eg_tests.ini
- eg_thresholds.ini
- eg_udtests.ini
Typically, all these files will be available in the /opt/egurkha/manager/config directory of both the old and new managers.
- Similarly, search for entries related to IC tests in the eg_newtests.ini file (in the /opt/egurkha/manager/config/tests directory), and copy them to the file with the same name in the new manager’s system.
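The search-and-copy step above can be scripted. The sketch below extracts one test's entries from an old manager's ini file and appends them to the new manager's copy. It assumes, for illustration, a conventional [Section]-style ini layout and a hypothetical IC test named MyICTest; the real entry format in the eG ini files may differ.

```shell
#!/bin/sh
# Sketch: copy the entries for one IC test (hypothetical name "MyICTest")
# from an old manager's eg_tests.ini into the new manager's copy.
set -e
old=$(mktemp -d); new=$(mktemp -d)

# Fake old-manager ini with one IC-test section and one stock section.
cat > "$old/eg_tests.ini" <<'EOF'
[MyICTest]
period=300
[DiskTest]
period=60
EOF
printf '[DiskTest]\nperiod=60\n' > "$new/eg_tests.ini"

# Pull the [MyICTest] section (up to the next section header) and append it.
awk '/^\[MyICTest\]/{f=1} f && /^\[/ && !/^\[MyICTest\]/{f=0} f' \
    "$old/eg_tests.ini" >> "$new/eg_tests.ini"

grep -c 'MyICTest' "$new/eg_tests.ini"
```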
- Copy the test classes defined in the old manager for the new tests added using IC to the new manager system. Typically, the test classes reside in the /opt/egurkha/manager/config/tests directory (on Windows, it will be in the <EG_INSTALL_DIR>\manager\config\tests directory), and will be named in the format: <IC_Test_Name>.class. Copy the .class files from the above-mentioned location in the old manager system, to the same location in the new manager system.
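Copying the class files is a straightforward file transfer. The sketch below mirrors the directory layout described above using temporary directories; MyICTest is a hypothetical test name, and in a real deployment the source and destination sit on different machines, so you would use scp or a file share instead of a local cp.

```shell
#!/bin/sh
# Sketch: copy IC-test class files from the old manager's tests directory
# to the same location on the new manager. "MyICTest" is hypothetical.
set -e
old=$(mktemp -d); new=$(mktemp -d)
mkdir -p "$old/manager/config/tests" "$new/manager/config/tests"
: > "$old/manager/config/tests/MyICTest.class"   # stand-in class file

cp "$old"/manager/config/tests/*.class "$new/manager/config/tests/"
ls "$new/manager/config/tests"
```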
- Next, in the /opt/egurkha/bin/database folder of the old manager, look for sql files that are named after the IC tests configured on the old manager. These files typically contain the queries required for creating the tables for the IC tests. Run each of these queries on the new manager’s database as the eG database user, so that the required tables are created therein.
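Running each IC-test .sql file against the new database is a simple loop over the files. In the sketch below, run_sql is a placeholder for your actual database client invocation (for example, sqlplus for Oracle or sqlcmd for MS SQL), executed as the eG database user; here it merely logs which files it was given, and the .sql content is a stand-in.

```shell
#!/bin/sh
# Sketch: run every IC-test .sql file against the new manager's database.
# "run_sql" is a placeholder for the real database client command.
set -e
sqldir=$(mktemp -d)
printf 'CREATE TABLE my_ic_test_table (id INT);\n' > "$sqldir/MyICTest.sql"

log=$(mktemp)
run_sql() { echo "executed $1" >> "$log"; }   # placeholder client

for f in "$sqldir"/*.sql; do
    run_sql "$f"
done
cat "$log"
```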
- To apply the IC-based changes on Unix, execute the upload script from the /opt/egurkha/bin directory. To do this, switch to the /opt/egurkha/bin directory from the command prompt, and type the command: ./upload. The script will then request you to specify the Java home directory:
Please enter the location of your Java home directory :
Once the home directory is specified, press the Enter key to update the configurations.
- To apply the IC-based changes on Windows, simply execute the upload.bat batch file in the <EG_INSTALL_DIR>\lib directory.
- Copy the icons/images used by IC tests from the old manager and place them in the appropriate locations (/opt/egurkha/manager/tomcat/webapps/final/monitor/eg_images/eg_layout/eg_icons/ and /opt/egurkha/manager/tomcat/webapps/final/admin/eg_images/) in the new manager.
- Install the license that enables the redundant manager capability.
- Start the new manager.
- Then, if the new manager is set as the primary manager, connect to it and log in to its administrative interface.
- Open the manage/unmanage page using the menu sequence: Components -> Manage.
- Manage any component using this page and click the Update button to save the change. This needs to be done in order to inform the clustered environment of the currently monitored components.
- If you have set the new manager as the secondary manager, then do not start this manager until a primary manager is added to the redundant setup and is started.
Note:
The secondary manager in a cluster will sync its time with that of the primary manager. Therefore, if the new manager is set as the primary manager of a cluster, and both the old and new managers exist in different time zones, then a data gap or data overlap (as the case may be) is bound to occur.