Load Balancer for IPv4 and IPv6 requires an Ethernet-like environment to function properly. Operations such as MAC forwarding, neighbor discovery, and advertising a cluster address by using a return address all require MAC-layer details. In layer 3 mode, MAC-layer functionality is mostly handled by the OSA card, so the Load Balancer must depend on operating-system-level configuration and a few other methods to function as expected.
Without MAC-layer data, the Load Balancer cannot forward traffic or advertise cluster addresses; hence, the back-end servers logically appear to be on a different network.
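Whether an OSA-attached interface runs in layer 2 or layer 3 mode is an attribute of the underlying qeth device. As a quick check on Linux on IBM Z (a sketch, assuming the lsqeth tool from the s390-tools package is installed):
# Show the qeth attributes for eth0; "layer2 : 0" indicates layer 3 mode,
# "layer2 : 1" indicates layer 2 mode.
lsqeth eth0 | grep layer2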
NOTE: Because of a bug in the Linux® kernel, high availability does not work on SLES 11 and RHEL 6. With SLES 11 SP2, high availability works. This problem is specific to layer 3 mode only.
The following is a sample configuration of Load Balancer for IPv4 and IPv6 in layer 3 network mode with IPv4 addresses. Adjust the commands for IPv6 as required.
Assumptions:
Load Balancer workstation has NFA (Non-forwarding Address): 1.2.3.4
For high availability, assume backup Load Balancer has NFA 1.2.3.3
Cluster address: 1.2.3.5
eth0: primary interface configured in layer 3
lo: loop-back interface on Load Balancer machine
Back-end workstations:
Server1 address: 1.2.3.6
Server2 address: 1.2.3.7
eth0: primary interface on both servers, configured in layer 3
lo: loop-back interface on both servers
Sample requirement:
Distribute HTTP traffic on port 80 between Server1 and Server2.
Configuration Steps on the Load Balancer (ULB) workstation:
A) Add Clusters, Ports, Servers
dscontrol set loglevel 1
dscontrol executor start
dscontrol cluster add 1.2.3.5
dscontrol port add 1.2.3.5@80
# Because ARP is not available in layer 3, LB uses GRE or IPIP encapsulation to forward
# packets to the back-end servers. This example uses GRE encapsulation; IPIP works as well.
dscontrol server add 1.2.3.5@80@1.2.3.6 encapforward yes encaptype gre encapcond always
dscontrol server add 1.2.3.5@80@1.2.3.7 encapforward yes encaptype gre encapcond always
NOTE:
If NAT forwarding is used instead of encapsulated MAC forwarding, treat the return address as a cluster address when you apply the configuration in step B) and when you remove the configuration in step C).
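After the servers are added, the definitions can be checked from the command line. A minimal verification sketch (assuming the status subcommands of your dscontrol release; output formats vary):
# Confirm the cluster, port, and server definitions
dscontrol cluster status 1.2.3.5
dscontrol port status 1.2.3.5@80
dscontrol server status 1.2.3.5@80@1.2.3.6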
B) Advertising Cluster address
Configure the cluster address on eth0 and the corresponding iptables DROP rule. The DROP rule prevents the operating system's own TCP/IP stack from answering for the cluster address, leaving that traffic to the Load Balancer.
For high availability, add these commands to the goActive script. Without high availability, the commands can be added to the goInOp script or added to the operating system configuration so that the settings persist after a reboot.
ip -f inet addr add 1.2.3.5/<prefix> dev eth0
iptables -t filter -A INPUT -d 1.2.3.5 -j DROP
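To confirm that the address and the filter rule are in place, standard listing commands can be used:
# Verify the cluster address on eth0 and the DROP rule in the INPUT chain
ip -f inet addr show dev eth0
iptables -t filter -L INPUT -n | grep 1.2.3.5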
C) Removing the cluster configuration
Remove the cluster address on eth0 and remove the corresponding iptables DROP rule.
For high availability, the two commands in step B) must be reversed in the goStandby and goIdle scripts. On a stand-alone load balancer, the commands can be reversed in the goIdle script, or left to persist after the load balancer is stopped if they are not enabled in the goInOp script.
iptables -t filter -D INPUT -d 1.2.3.5 -j DROP
ip -f inet addr del 1.2.3.5/<prefix> dev eth0
NOTE:
If existing filters are configured that would intercept traffic to the load balancer addresses, add the load balancer filter to the top of the filter chain.
To list the existing filters:
iptables -t filter -L
To add the new filter at the top of the chain (note that the chain name INPUT is case-sensitive):
iptables -t filter -I INPUT 1 -d 1.2.3.5 -j DROP
To remove the filter from the top of the chain, use the same syntax provided in step C).
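To see where each rule sits in the chain, iptables can list rules with their positions (the --line-numbers option is standard iptables):
# List INPUT rules together with their position numbers
iptables -t filter -L INPUT -n --line-numbers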
NOTE:
Neighbor discovery does not occur because the Load Balancer cannot send ARP requests in layer 3 mode. Hence, the server report always shows server availability as "Unavailable".
Configuration Steps on the back-end workstations:
NOTE:
These configuration steps are not required with NAT forwarding.
Put these commands in the appropriate .rc and sysctl.conf files so that they survive reboots.
A) Configure GRE tunnel interface.
1) modprobe ip_gre
2) ip tunnel add ulbgre mode gre remote 1.2.3.4 local 1.2.3.6 (use local 1.2.3.7 on the other back-end server)
If high availability is used, also add a tunnel definition for the backup load balancer. The second tunnel must have its own name, because two tunnels cannot share the name ulbgre (ulbgre2 is used here only as an example); bring it up as in step 3) and include it in the rp_filter settings in step C):
ip tunnel add ulbgre2 mode gre remote 1.2.3.3 local 1.2.3.6 (use local 1.2.3.7 on the other back-end server)
3) ip link set ulbgre up
4) ip addr add 1.2.3.5/32 dev ulbgre scope host
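The tunnel can be verified with standard ip commands:
# Confirm the tunnel endpoints and the host-scoped cluster address
ip tunnel show ulbgre
ip addr show dev ulbgre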
B) ARP Suppression
1) sysctl -w net.ipv4.conf.all.arp_ignore=3 (do not answer ARP requests for host-scoped local addresses, such as the cluster address on ulbgre)
2) sysctl -w net.ipv4.conf.all.arp_announce=2 (always use the best local address as the source of outgoing ARP requests)
C) Disable Reverse Path filtering (so that packets for the cluster address that arrive through the GRE tunnel are not dropped)
1) sysctl -w net.ipv4.conf.lo.rp_filter=0
2) sysctl -w net.ipv4.conf.eth0.rp_filter=0
3) sysctl -w net.ipv4.conf.all.rp_filter=0
4) sysctl -w net.ipv4.conf.ulbgre.rp_filter=0
5) sysctl -w net.ipv4.conf.default.rp_filter=0
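To make the settings in steps B) and C) persistent across reboots, the equivalent /etc/sysctl.conf entries look like the following sketch. Note that the ulbgre entry can take effect only after the tunnel exists, so it may need to be reapplied from the same script that creates the tunnel:
# ARP suppression
net.ipv4.conf.all.arp_ignore = 3
net.ipv4.conf.all.arp_announce = 2
# Reverse-path filtering disabled
net.ipv4.conf.lo.rp_filter = 0
net.ipv4.conf.eth0.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.ulbgre.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0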
[{"Product":{"code":"SSEQTP","label":"WebSphere Application Server"},"Business Unit":{"code":"BU059","label":"IBM Software w\/o TPS"},"Component":"Edge Component","Platform":[{"code":"PF016","label":"Linux"}],"Version":"9.0;8.5;8.0;7.0","Edition":"Network Deployment","Line of Business":{"code":"LOB45","label":"Automation"}}]