OpenMeetings 2.1 or later is required to use clustering. One database is used for all OpenMeetings servers, so all database tables are shared across OM instances. Certain folders should be shared between all servers to allow access to the files and recordings.
- Multiple OM servers should be set up as described in Installation
- All servers should be configured to use the same time zone (to prevent the schedulers from dropping user sessions as outdated)
- All servers should be configured to use the same DB
Multicast should be set up on all servers
Here are the steps for *nix-like systems (Reference article):
- Check that your network interface supports multicast, e.g. by running `ip link show eth0` in a terminal window. If you see the word MULTICAST against your network interface, it means your kernel is compiled with the multicast option and your network interface supports it.
- Check if multicast routing is configured by running `netstat -nr`. If you do not see a route covering the multicast range 224.0.0.0 – 239.255.255.255 in the first table, it means you need to add your desired multicast address to your routing table.
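For illustration, on a node where the multicast route is already present, the `netstat -nr` output contains a line like the following (exact formatting varies by distribution):

```
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
224.0.0.0       0.0.0.0         240.0.0.0       U         0 0          0 eth0
```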
- To add the multicast address:
sudo route add -net 224.0.0.0 netmask 240.0.0.0 dev eth0
where eth0 corresponds to your network interface name.
Make sure you run this command on all servers you want to be multicast enabled.
- Using netstat, check that the multicast route is visible in your routing table (see step 2).
- Using tcpdump and ping check if your server is able to multicast.
Run the following command on all the servers.
sudo tcpdump -ni eth0 host 224.0.0.251
ping -t 1 -c 2 224.0.0.251
After testing you can delete the multicast route:
sudo route -v delete -net 224.0.0.0 netmask 240.0.0.0
- Add users who can connect to the database remotely
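The exact statements depend on your database; as a hypothetical sketch, assuming MySQL, a database named openmeetings, and cluster nodes in the 10.1.1.0/24 subnet (user name and password are placeholders):

```sql
-- Hypothetical example: allow user "om" to connect from the cluster subnet
CREATE USER 'om'@'10.1.1.%' IDENTIFIED BY 'change_me';
GRANT ALL PRIVILEGES ON openmeetings.* TO 'om'@'10.1.1.%';
FLUSH PRIVILEGES;
```

You may also need to adjust the bind-address setting in the MySQL configuration so the server listens on the network interface, not only on localhost.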
- In the file /opt/om/webapps/openmeetings/WEB-INF/classes/META-INF/persistence.xml set the correct server address, login and password. Also uncomment the following line, replacing 127.0.0.1 with a semicolon-separated list of the addresses of all cluster nodes:
<property name="openjpa.RemoteCommitProvider" value="tcp(Addresses=127.0.0.1)" />
If files and recordings use the same physical folders, they will be available on each node. You can share these folders using Samba or NFS, for example. To use NFS do the following:
- To ease the upgrade process, set the OM data dir to some external folder, for example /opt/omdata
- Install NFS on the data server. In the file /etc/exports add export lines for the OM data folder.
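The exact export lines depend on your network; as a hedged example, assuming the data dir /opt/omdata and cluster nodes 10.1.1.2 and 10.1.1.3:

```
/opt/omdata 10.1.1.2(rw,sync,no_subtree_check)
/opt/omdata 10.1.1.3(rw,sync,no_subtree_check)
```

Apply the changes with `sudo exportfs -ra`.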
- Install the NFS common tools on the other nodes. In the file /etc/fstab add the following line:
10.1.1.1:/opt/omdata /opt/omdata nfs timeo=50,hard,intr
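After adding the /etc/fstab entry, the share can be mounted on each node; a sketch, assuming the NFS client tools are already installed:

```
sudo mkdir -p /opt/omdata
sudo mount /opt/omdata      # uses the /etc/fstab entry above
df -h /opt/omdata           # verify the NFS mount is active
```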
OM nodes configuration
In the OM configuration file:
- set instance-name for each server to a unique value
- set server.url for each server to the full public URL of this server (please NOTE: using a numeric IP address might break HTTPS)
- Comment out/delete the following block:
<network>
    <join>
        <auto-detection enabled="false"/>
        <multicast enabled="false"/>
        <tcp-ip enabled="false"/>
        <aws enabled="false"/>
    </join>
</network>
- Un-comment the following block (ensure it contains valid parameters):
<network>
    <join>
        <multicast enabled="true">
            <multicast-group>224.2.2.3</multicast-group>
            <multicast-port>54327</multicast-port>
            <multicast-time-to-live>32</multicast-time-to-live>
            <multicast-timeout-seconds>2</multicast-timeout-seconds>
        </multicast>
    </join>
    <interfaces enabled="true">
        <interface>192.168.1.*</interface>
    </interfaces>
</network>
- If there is more than one network interface with multicast support, and/or additional Hazelcast configuration is required, refer to the following documentation: https://docs.hazelcast.org/docs/4.0/manual/html-single/index.html
Ensure everything works as expected
- Set up the cluster and log in with two users, then go to the same room (before entering the room, check that the status page with the room list shows the correct number of participants). You should initially log in to the same server; the server will automatically redirect you to the appropriate server for the conference room. Both users should be in the same room.
- Do the same with only two users, but go to _different_ rooms. The calculation should send the users to different servers, because on a cluster with two nodes two different rooms should be distributed exactly one room per node. While those users are logged in, you can log in to node1 and node2 of your cluster, go to Administration > Connections, and check in the column "Server Name" where they are located. They should be on different servers.