ASA Bridge Groups
Hello everyone. I know I’ve been neglecting this blog for too long. Can’t promise that things are going to change, but I have a good post for today.
I was recently exposed to some new technology while working with a customer and had to learn it pretty quickly. This post is about a new feature in the Cisco ASA 8.4 code called Bridge Groups, which is essentially the addition of the BVI interfaces that have existed in IOS forever. The feature is useful when running an ASA transparently but not physically inline. Running a firewall physically inline works well, but it limits you to the number of physical interfaces available on each firewall, and adding interfaces to a firewall is expensive. Bridge groups also save you from burning a context per firewalled VLAN on your ASA. Here we'll use a 3750 for physical connectivity and use BVIs to force traffic through the firewall. Here is the physical topology:
I’m trying to convey quite a bit of information with this diagram. We have a layer 3 switch as the “core” and an ASA 5505 trunked to it. We have two hosts, PC1 and Server which are in VLAN 50 and VLAN 25, respectively. VLAN 26 is only for management of the ASA. VLAN 50 (and 2050) is where we’ll focus for this post. I’m going to start with the config here as I think it will make more sense when I try to explain.
vlan 25
 name Servers
vlan 50
 name L2_Firewalled_Vlan50
vlan 2050
 name L3_Firewalled_Vlan50
!
interface GigabitEthernet1/0/5
 description ASA Inside
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 26,50
 switchport mode trunk
 spanning-tree portfast
!
interface GigabitEthernet1/0/7
 description ASA Outside
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 2050
 switchport mode trunk
 spanning-tree portfast
!
interface Vlan25
 description Servers-Media
 ip address 192.168.25.1 255.255.255.0
!
interface Vlan2050
 ip address 192.168.50.1 255.255.255.0
This config is pretty basic. We have created some VLANs, configured some trunks and SVIs, etc. The key thing to notice is that there is no L3 interface for VLAN 50; the L3 gateway for PC1 lives on VLAN 2050.
firewall transparent
!
interface Ethernet0/1
 switchport trunk allowed vlan 26,50
 switchport mode trunk
!
interface Ethernet0/2
 switchport trunk allowed vlan 2050
 switchport mode trunk
!
interface Vlan26
 nameif management
 bridge-group 2
 security-level 100
!
interface BVI2
 ip address 192.168.26.5 255.255.255.0
!
interface Vlan50
 nameif inside
 bridge-group 1
 security-level 100
!
interface Vlan2050
 nameif outside
 bridge-group 1
 security-level 0
!
interface BVI1
 ip address 220.127.116.11 255.255.255.0
!
access-list inside-in extended permit ip any any
access-group inside-in in interface inside
route management 0.0.0.0 0.0.0.0 192.168.26.1
This is where the magic happens. First, we configure the firewall in transparent mode. Then we configure our interfaces and allow the necessary VLANs: 50 and 26 (management) on e0/1, the "inside" interface, and 2050 on e0/2, the "outside" interface. We then configure VLAN 26 for management and add it to bridge group 2, which has an IP and default route configured at the bottom; again, this is for management only. Then we configure VLANs 50 and 2050, with 50 as inside and 2050 as outside, and put both in bridge group 1. In my testing it appears that the BVI interface REQUIRES an IP address to pass traffic, but which IP you pick seems irrelevant, so I've simply used the address shown on BVI1 above. Finally, we have our inside-in access list allowing any to any, applied to the inside interface inbound.
Hopefully this is beginning to make sense. Essentially, all PC1 traffic destined outside of its subnet will traverse the firewall. When PC1 initiates traffic outside of its subnet, it will ARP for its gateway, which will begin on VLAN 50, then be bridged through the firewall to VLAN 2050. The 3750 will reply on 2050 which will be sent back to 50. All L3 traffic should flow this way. I’ve tried to make a diagram to show the flow:
Let’s do some testing:
C:\Users\PC1>telnet 192.168.25.10 22
SSH-2.0-OpenSSH_5.2
Protocol mismatch.
Connection to host lost.

C:\Users\PC1>
Here we see that SSH is open from PC1 to Server.
Now let’s break that connectivity:
ASA(config)# access-list inside-in extended deny tcp any host 192.168.25.10 eq ssh
ASA(config)# no access-list inside-in extended permit ip any any
ASA(config)# access-list inside-in extended permit ip any any
ASA(config)# sh run access-list inside-in
access-list inside-in extended deny tcp any host 192.168.25.10 eq ssh
access-list inside-in extended permit ip any any
We added a deny statement for anything originating from the inside destined to TCP port 22 on Server, and we allow everything else. Note the order of operations: a new ACE is appended to the end of the list, so we removed and re-added the permit to place it below the deny. We have verified that the ACL is correct.
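As an aside, the remove/re-add shuffle can be avoided: the ASA lets you insert an ACE at a specific position with the line keyword. A quick sketch of the same deny, placed at the top:

```
ASA(config)# access-list inside-in line 1 extended deny tcp any host 192.168.25.10 eq ssh
```

Entries added without a line number are appended to the end of the list.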
Let’s test again:
C:\Users\PC1>telnet 192.168.25.10 22
Connecting To 192.168.25.10...Could not open connection to the host, on port 22: Connect failed

C:\Users\PC1>
The results are rather anti-climactic. The session just does not work.
It looks a bit more interesting on the ASA:
ASA# sh logg
%ASA-4-106023: Deny tcp src inside:192.168.50.100/49198 dst outside:192.168.25.10/22 by access-group "inside-in"
%ASA-4-106023: Deny tcp src inside:192.168.50.100/49198 dst outside:192.168.25.10/22 by access-group "inside-in"
%ASA-4-106023: Deny tcp src inside:192.168.50.100/49198 dst outside:192.168.25.10/22 by access-group "inside-in"
We see the session being denied on the ASA as expected.
That’s about it for this one. We can now run our transparent firewalls through an intermediate switch and we’re also able to run multiple VLANs through a transparent firewall without using up contexts. All we need to do to allow more VLANs to be firewalled is create the L2/L3 on the switch, add the VLANs to the trunks and configure the ASA with the appropriate SVIs and bridge groups (and ACLs). Hope this was useful.
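To make that recipe concrete, here's a hypothetical sketch of firewalling a new VLAN 60. The VLAN numbers and addressing are mine for illustration, not from the lab above:

```
! 3750: new L2 and L3 VLANs, the SVI, and trunk updates
vlan 60
 name L2_Firewalled_Vlan60
vlan 2060
 name L3_Firewalled_Vlan60
interface Vlan2060
 ip address 192.168.60.1 255.255.255.0
interface GigabitEthernet1/0/5
 switchport trunk allowed vlan add 60
interface GigabitEthernet1/0/7
 switchport trunk allowed vlan add 2060

! ASA: allow the VLANs on the trunks and bridge them in a new group
interface Ethernet0/1
 switchport trunk allowed vlan 26,50,60
interface Ethernet0/2
 switchport trunk allowed vlan 2050,2060
interface Vlan60
 nameif inside-60
 bridge-group 3
 security-level 100
interface Vlan2060
 nameif outside-60
 bridge-group 3
 security-level 0
interface BVI3
 ip address 192.168.60.5 255.255.255.0
access-list inside-60-in extended permit ip any any
access-group inside-60-in in interface inside-60
```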
Disclaimer: This is a very new feature to me. It’s working well in the lab, but please let me know if I have anything wrong or there is a better way to accomplish something.
This entry was posted by Colby on September 5, 2011 at 2:13 pm, and is filed under Tutorials.
It’s official, the CCIE DC has been announced. Here’s the meat of the announcement:
“Cisco announced today that a new expert-level certification for data center professionals will be available starting September 2012. This expert-level certification validates a candidate’s expert knowledge of implementing and troubleshooting complex data center networks. The program offers candidates the knowledge and skills required to design, implement, operate, monitor, and troubleshoot complex data center networks. Products tested in this certification include Cisco Catalyst 3750, MDS 9222i, Nexus 7709(sic), 5548, 2232, 1000v and Cisco Unified Computing System (UCS), and Cisco Application Control Engine Appliance.”
This is a very interesting certification. It’s definitely crossing the line between a Data Center engineer and a Network Engineer. I think I might give it a shot. I have almost zero knowledge of UCS and Storage, but I think I could learn it. Working with everything on the blueprint is almost my dream job.
Post your thoughts in the comments, especially if you’re considering attaining this cert.
Here’s the blueprint:
Cisco Data Center Infrastructure – NXOS
- Implement NXOS L2 functionality
Implement VLANs and PVLANs
Implement Spanning-Tree Protocols
Implement Unidirectional Link Detection (UDLD)
Implement Fabric Extension via the Nexus family
- Implement NXOS L3 functionality
Implement Basic EIGRP in Data Center Environment
Implement Basic OSPF in Data Center Environment
Implement BFD for Dynamic Routing protocols
- Implement Basic NXOS Security Features
Implement AAA Services
Configure IP ACLs, MAC ACLs and VLAN ACLs
Configure Port Security
Configure DHCP Snooping
Configure Dynamic ARP Inspection
Configure IP Source Guard
Configure Cisco TrustSec
- Implement NXOS High Availability Features
Implement First-Hop Routing Protocols
Implement Graceful Restart
Implement nonstop forwarding
Implement vPC and VPC+
Implement Overlay Transport Protocol (OTV)
- Implement NXOS Management
Implement SPAN and ERSPAN
Implement Smart Call Home
Manage System Files
Implement NTP, PTP
Configure and Verify DCNM Functionality
- NXOS Troubleshooting
Utilize SPAN, ERSPAN and EthAnalyzer to troubleshoot a Cisco Nexus problem
Utilize NetFlow to troubleshoot a Cisco Nexus problem
Given an OTV problem, identify the problem and potential fix
Given a VDC problem, identify the problem and potential fix
Given a vPC problem, identify the problem and potential fix
Given a Layer 2 problem, identify the problem and potential fix
Given a Layer 3 problem, identify the problem and potential fix
Given a multicast problem, identify the problem and potential fix
Given a FabricPath problem, identify the problem and potential fix
Given a Unified Fabric problem, identify the problem and potential fix
Cisco Storage Networking
- Implement Fiber Channel Protocols Features
Implement Port Channel, ISL and Trunking
Implement Basic and Enhanced Zoning
Implement FC Domain Parameters
Implement Fiber Channel Security Features
Implement Proper Oversubscription in an FC environment
- Implement IP Storage Based Solution
Implement IP Features including high availability
Implement iSCSI including advanced features
Implement SAN Extension tuner
Implement FCIP and Security Features
Implement iSCSI security features
Validate proper configuration of IP Storage based solutions
- Implement NXOS Unified Fabric Features
Implement basic FC in NXOS environment
Implement Fiber channel over Ethernet (FCoE)
Implement NPV and NPIV features
Implement Unified Fabric Switch different modes of operation
Implement QoS Features
Implement FCoE NPV features
Implement multihop FCoE
Validate Configurations and Troubleshoot problems and failures using Command Line, show and debug commands.
Cisco Data Center Virtualization
- Manage Data Center Virtualization with Nexus1000v
Implement QoS, Traffic Flow and IGMP Snooping
Implement Network monitoring on Nexus 1000v
Implement n1kv portchannels
Troubleshoot Nexus 1000V in a virtual environment
- Implement Nexus1000v Security Features
Dynamic ARP Inspection
IP Source Guard
Access Control Lists
Configuring Private VLANs
Cisco Unified Computing
- Implement LAN Connectivity in a Unified Computing Environment
Configure different Port types
Implement Ethernet end Host Mode
Implement VLANs and Port Channels.
Implement Pinning and PIN Groups
Implement Disjoint Layer 2
- Implement SAN Connectivity in a Unified Computing Environment
Implement FC ports for SAN Connectivity
Implement FC Port Channels
Implement FC Trunking and SAN pinning
- Implement Unified Computing Server Resources
Create and Implement Service Profiles
Create and Implement Policies
Create and Implement Server Resource Pools
Implement Updating and Initial Templates
Implement Boot From remote storage
Implement Fabric Failover
- Implement UCS Management tasks
Implement Unified Computing Management Hierarchy using ORG and RBAC
Configure RBAC Groups
Configure Remote RBAC Configuration
Configure Roles and Privileges
Create and Configure Users
Implement Backup and restore procedures in a unified computing environment
Implement system wide policies
- Unified Computing Troubleshooting and Maintenance
Manage High Availability in a Unified Computing environment
Configure Monitoring and analysis of system events
Implement External Management Protocols
Collect Statistical Information
Collect TAC specific information
Implement Server recovery tasks
Cisco Application Networking Services – ANS
- Implement Data Center application high availability and load balancing
Implement standard ACE features for load balancing
Configuring Server Load Balancing Algorithm
Configure different SLB deployment modes
Implement Health Monitoring
Configure Sticky Connections
Implement Server load balancing in HA mode
Real short one today. This post is about Nexus port profiles. Port profiles are great for ensuring consistency across port configurations. They allow us to configure a template which is inherited by a group of ports. There are three types of port profiles: Ethernet, Interface-VLAN (SVI), and Port-Channel. In my example, we'll be configuring several ports as "VM Server" ports. Some may ask why one would choose these over the simple "interface range" command. In my opinion, port profiles are stricter: the range command configures whatever range of ports you give it, whereas a port profile configures ALL ports that inherit it, and any new configuration added to the profile is pushed to the inheriting ports as well.
Here’s an example:
n5k-1(config)# port-profile type ethernet VM
n5k-1(config-port-prof)# switchport access vlan 225
n5k-1(config-port-prof)# spanning-tree port type edge
n5k-1(config-port-prof)# spanning-tree bpduguard enable
n5k-1(config-port-prof)# state enabled
Pretty basic. We create an "ethernet" port profile named VM and assign some config to it. The command "state enabled" makes the profile usable; without it, we wouldn't be able to inherit the profile on a port.
Here is how we assign the config to a group of ports:
n5k-1(config)# int e1/22 - 25
n5k-1(config-if-range)# inherit port-profile VM
We select a range of ports and tell them to inherit the VM profile. That’s all.
Now we will do some verification:
n5k-1(config-port-prof)# sh port-profile
port-profile VM
 type: Ethernet
 description:
 status: enabled
 max-ports: 512
 inherit:
 config attributes:
  switchport access vlan 225
  spanning-tree port type edge
  spanning-tree bpduguard enable
 evaluated config attributes:
  switchport access vlan 225
  spanning-tree port type edge
  spanning-tree bpduguard enable
 assigned interfaces:
  Ethernet1/22
  Ethernet1/23
  Ethernet1/24
  Ethernet1/25
This command tells us everything. We see that the profile is enabled, the config it's using, and which ports are inheriting it.
Here’s another way to find profile information:
n5k-1(config-port-prof)# sh run port-profile
port-profile type ethernet VM
  switchport access vlan 225
  spanning-tree port type edge
  spanning-tree bpduguard enable
  state enabled

interface Ethernet1/22
  inherit port-profile VM
interface Ethernet1/23
  inherit port-profile VM
interface Ethernet1/24
  inherit port-profile VM
interface Ethernet1/25
  inherit port-profile VM
That’s it. Pretty simple to understand and configure, but also very useful.
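One more property worth showing: because inheriting ports track the profile, a command added to the profile later is pushed to all of them at once. A quick sketch; the storm-control value here is made up for illustration:

```
n5k-1(config)# port-profile type ethernet VM
n5k-1(config-port-prof)# storm-control broadcast level 5.00
```

After this, "sh port-profile" should list the new command under the config attributes, and every port in the assigned interfaces list picks it up without being touched individually.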
Hi guys, I'm back for my annual post. :/
I’ve been working with a good amount of Nexus gear lately. Today we’ll configure Configuration Synchronization offered on the Nexus 5K platform. This feature allows one to create a switch profile on a vPC member and push the profile’s configuration to the peer. This is crucial as vPC configurations need to match exactly on both peers. If configurations don’t match, the channel could be suspended. Here’s our topology:
We're using an Enhanced vPC (EvPC) topology here (supported in 5.1(3)N1(1) and up): the FEXes are dual-homed, connected to the 5Ks via vPC, and we're also running a vPC to the host. Config Sync is almost a necessity here. We're using 169.254.0.0/30 for the peer-keepalive link IPs (I stole this practice from Chris Marget). It's important to note that CFS (Cisco Fabric Services, the magic that makes config sync work) communicates over the Management 0/peer-keepalive interface.
This is the base config which is entered on both peers:
N5k-1(config)# cfs ipv4 distribute
N5k-1(config)# conf sync
N5k-1(config-sync)# switch-profile 5k-Profile
Switch-Profile started, Profile ID is 1
N5k-1(config-sync-sp)# sync-peers destination 169.254.0.2

N5k-2(config)# cfs ipv4 distribute
N5k-2(config)# conf sync
N5k-2(config-sync)# switch-profile 5k-Profile
Switch-Profile started, Profile ID is 1
N5k-2(config-sync-sp)# sync-peers destination 169.254.0.1
We’ve enabled CFS and created the 5k-Profile on both peers. We also had to tell the switches to sync with each other. Again, this will be done over the management/keepalive interface.
The following should be entered on the peer you’re using as the configuration point. I’ve chosen 5k-1 here:
N5k-1(config-sync-sp)# int e101/1/1, e102/1/1
N5k-1(config-sync-sp-if-range)# channel-group 50 mode active
N5k-1(config-sync-sp-if-range)# interface po50
N5k-1(config-sync-sp-if)# description Server-1
N5k-1(config-sync-sp-if)# switchport mode access
N5k-1(config-sync-sp-if)# switchport access vlan 100
N5k-1(config-sync-sp-if)# vpc 50
N5k-1(config-sync-sp-if)# verify
Verification successful...
N5k-1(config-sync-sp-if)# commit
Proceeding to apply configuration. This might take a while depending on amount of configuration in buffer.
Please avoid other configuration changes during this time.
Commit Successful
Here we've picked a range of ports and joined them to a port-channel. Then we enter the port-channel and configure our settings; notice that we've made this "vpc 50". Before committing, we run the "verify" command, which runs through the config and ensures that it can be applied to both peers. Finally, we commit the changes. The switch pauses for a bit and then tells us we've succeeded. A couple of notes on this. I've seen the switch return a successful verification but still fail on the commit; this is typically due to pre-existing commands that cause the range or port-channel config to fail. The other note is that if your commit does fail, you can run "show switch-profile status" to see what went wrong.
Now we will do some verification:
N5k-2# sh run int e101/1/1
interface Ethernet101/1/1
  switchport access vlan 100
  channel-group 50 mode active

N5k-2# sh run int e102/1/1
interface Ethernet102/1/1
  switchport access vlan 100
  channel-group 50 mode active

N5k-2# sh run int po50
interface port-channel50
  description Server-1
  switchport access vlan 100
  vpc 50
Everything looks good on the 5k-2 ports. We can see the configuration came through as expected. Keep in mind that if a port is configured using a profile you will not be able to configure it manually; all additions/changes need to be made through the profile.
That’s the basics for config sync. You can do quite a bit with this and it is definitely helpful in vPC environments. I made this post mostly because I was unable to find this information posted in a way I liked. Hopefully this is helpful to some.
Disclaimer: This is a new feature to me. It’s working well in the lab, but please let me know if I have anything wrong or there is a better way to accomplish something.
I had an interesting conversation the other day regarding OSPF. I don’t want to give too much away, so here we go. This is the topology:
Assume interfaces have correct bandwidth statements and no cost commands have been added. R1 and R2 are redistributing the 192.168.1.0/24 prefix as E2 with a cost of 100.
Which path does R4 take to the 192.168.1.0/24 network? Does it load balance? Explain.
I started a thread on Networking-Forum on this as well. Post your answer here in the comments or over there.
Update: Here’s another question. What happens if we change it to:
Everything is the same except R2 is now redistributing as E1. Which path and why?
I've been meaning to post this for some time. A while back there was a thread on Networking Forum where someone mentioned that 2960s can route now. The 2960 is now a layer 3 switch. I was skeptical, but then I was pointed to this link, and I was very, very surprised. I'm not sure why Cisco decided to add this functionality to the 2960s, but I'm definitely grateful. As of 12.2(55)SE, 2960s are layer 3 switches (with some limitations, which I'll cover later). This knowledge came in handy shortly after reading that thread. I was working on a circuit upgrade for a remote site at my previous company. The circuit was ordered incorrectly and I ended up in need of a layer 3 switch ASAP. The tech we'd sent was leaving the next day, so there was no time to ship him anything. Luckily, we had some 2960s on site.
Configuring 2960s to route is pretty simple. The Switch Database Management template (SDM) needs to be changed to “lanbase-routing”. A reboot is (always) needed after changing the SDM template. After reboot, it’s just like enabling routing on any other L3 switch with the command “ip routing” from global config.
First we’ll change the SDM template:
SwitchA(config)#sdm prefer lanbase-routing
Changes to the running SDM preferences have been stored, but cannot take effect until the next reload.
Use 'show sdm prefer' to see what SDM preference is currently active.
SwitchA(config)#^Z
SwitchA#reload
System configuration has been modified. Save? [yes/no]: y
Proceed with reload? [confirm]
After changing the SDM template, we are reminded that we’ll need to reboot and also given a command to verify the change after the next boot.
Now we verify:
SwitchA#show sdm prefer
 The current template is "lanbase-routing" template.
 The selected template optimizes the resources in the switch
 to support this level of features for 8 routed interfaces
 and 255 VLANs.

  number of unicast mac addresses:                4K
  number of IPv4 IGMP groups + multicast routes:  0.25K
  number of IPv4 unicast routes:                  4.25K
    number of directly-connected IPv4 hosts:      4K
    number of indirect IPv4 routes:               0.25K
  number of IPv4 policy based routing aces:       0
  number of IPv4/MAC qos aces:                    0.125k
  number of IPv4/MAC security aces:               0.375k
The change was successful and we’re given the details about this SDM template.
Now seems like a good time to touch on the limitations of the layer 3 capabilities on 2960s. As we see in the output above, we’re limited to 8 routed interfaces. These will be SVIs. At this point, the 2960s don’t support routed physical interfaces (“no switchport”). Another important note is that we’re only allowed 16 static routes and there is no dynamic routing capability.
Now we’ll enable IP routing and configure a couple SVIs:
SwitchA#conf t
SwitchA(config)#ip routing
SwitchA(config)#
SwitchA(config)#int vlan 15
SwitchA(config-if)#ip add 192.168.15.1 255.255.255.0
SwitchA(config-if)#
SwitchA(config-if)#int vlan 25
SwitchA(config-if)#ip add 192.168.25.1 255.255.255.0
SwitchA(config)#^Z
SwitchA#sh ip route
...
C    192.168.15.0/24 is directly connected, Vlan15
C    192.168.25.0/24 is directly connected, Vlan25
Even now, I’m still amazed that we can do this with a 2960. As expected, it’s working. We have two SVIs and we can see the routing table reflect this.
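Since lanbase-routing supports only static routes (16 of them, no dynamic protocols), reaching anything beyond the connected subnets means pointing a static or default route at an upstream device. A sketch, with an assumed next hop on the VLAN 15 subnet:

```
SwitchA(config)#ip route 0.0.0.0 0.0.0.0 192.168.15.254
```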
As you can all see, I’m still pretty wowed. There are many scenarios where layer 3 2960s could be useful.
Today we’ll go over the process to connect an IOS voice gateway/CME (Call Manager Express) to the PSTN. I set this up last night and thought it would be a good post. I’ll briefly touch on using a SIP trunk as backup/failover too.
I’ve been running a SIP trunk to Flowroute for quite awhile, but I just recently got a “landline” from my ISP because they’re doing a promotion where it’s basically free. I’m keeping my SIP trunk, but I’ll be using it as backup since all US calling through the ISP is free. I’m using a 2811 with an NM-HD-2V and a VIC2-4FXO.
First we’ll verify that the card is recognized and working:
EDGE#sh voice port summ
                                           IN       OUT
PORT            CH   SIG-TYPE   ADMIN OPER STATUS   STATUS   EC
=============== == ============ ===== ==== ======== ======== ==
1/1/0           --  fxo-ls      up    dorm idle     on-hook  y
1/1/1           --  fxo-ls      up    dorm idle     on-hook  y
1/1/2           --  fxo-ls      up    dorm idle     on-hook  y
1/1/3           --  fxo-ls      up    dorm idle     on-hook  y
Everything looks good there, the router is recognizing the card and its ports. 1/1/0 is connected to the ISP’s MTA.
Now we’ll configure the dial peers:
dial-peer voice 1 pots
 preference 1
 destination-pattern 1[2-9].[1-9].......
 incoming called-number .T
 port 1/1/0
 forward-digits all
!
dial-peer voice 2 pots
 preference 1
 destination-pattern 1800.......
 incoming called-number .T
 port 1/1/0
 forward-digits all
!
dial-peer voice 3 pots
 preference 1
 destination-pattern 911
 incoming called-number .T
 port 1/1/0
 forward-digits all
We've configured three dial peers here. First, note that the dial peer type is "pots"; this is used when the destination is an analog port (like FXO). Next is the "preference" command; lower is better, making these peers more preferred than my SIP peers (which have a preference of 2). The "destination-pattern" command matches the dialed string, sort of like a static route. The first dial peer matches 11 digits: the leading 1, then the area code ([2-9] for the first digit, any digit second, [1-9] for the third), then seven wildcards matching 0 through 9. That third-digit [1-9] is my convoluted way of blocking 900 numbers. For incoming calls, we match any dialed digits with ".T". The "port" command tells the router where to send a call that matches the pattern, port 1/1/0 here. Finally, we tell the router to forward all digits. This is important because by default it would strip the explicitly defined digits, which we don't want here; we want all digits sent to the PSTN.
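For completeness, the backup SIP peers I mentioned (preference 2) would look roughly like this. This is a hedged sketch; the peer tag, codec, and session target are placeholders, not my actual Flowroute config:

```
dial-peer voice 100 voip
 preference 2
 destination-pattern 1[2-9].[1-9].......
 session protocol sipv2
 session target sip-server
 dtmf-relay rtp-nte
 codec g711ulaw
```

With the POTS peers at preference 1, the router only falls back to a peer like this when the call can't go out port 1/1/0.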
Next we’ll configure incoming calling, using Private-line Automatic Ringdown (PLAR):
voice-port 1/1/0 connection plar 5001 caller-id enable
Not much to that one. We go into the port config and use the "connection plar 5001" command. PLAR tells the router to automatically forward to an extension when the line goes off-hook, so when this port gets an incoming call from the PSTN (which takes it off-hook), it instantly forwards the call to extension 5001. We've also used "caller-id enable", which is pretty self-explanatory; it enables incoming caller ID on this FXO port.
That's all for this one. This could be the first(ish) of many voice-related posts. I'll be moving through the voice exams (hopefully quickly) in the next few months, and if there is interest, I can try to do some posts on things I'm learning/studying. Real voice (CUCM) can be tough to blog about because it's mostly GUI based, which requires lots of screenshots, and making 11ty screenshots for every post could get old quick. Post in the comments if you'd like to see voice topics, and if there's anything specific you'd like to read about.
Disclaimer: I am, by no means, a voice guy (yet?), so if you see any errors please let me know in the comments. I can say that this works, but I wouldn’t doubt if it’s not the “best” way.
Another quick one. Today I’m going to cover a simple, but very useful OSPF command: “show ip ospf rib”. This command is similar to “show ip route ospf”, but goes a bit deeper.
If you've ever done a routing protocol migration, you know how important it can be to see each protocol's full routing table. Much of the time, AD makes this difficult. Administrative Distance (AD) is the believability of a routing protocol on a Cisco device. The default AD values are:

Connected: 0
Static: 1
eBGP: 20
EIGRP (internal): 90
OSPF: 110
IS-IS: 115
RIP: 120
EIGRP (external): 170
iBGP: 200
Lower is better. If a router has identical routes from RIP and OSPF, the OSPF routes will be added to the table. If it’s EIGRP versus OSPF, EIGRP will win.
Company ATN Solutions is migrating from EIGRP to OSPF. They've chosen to run both protocols simultaneously while leaving the AD values at the default. This allows both protocols to co-exist without affecting the routing domain; EIGRP routes stay in the table due to EIGRP's lower AD. I'm not going through the migration steps or really any detail on how this would be performed, just using the scenario to demonstrate the command.
During this migration, we’ll need to verify that all existing EIGRP prefixes are also being learned by OSPF (we’ll use process number 200). If we were masochists, we could look at the LSDB to determine this, but that’s not really ideal. So we’ll use the “show ip ospf 200 rib”. First we’ll look at the existing RIB:
EDGE#sh ip route eigrp
D    192.168.15.0/24 [90/28416] via 192.168.13.1, 00:00:12, FastEthernet0/0
D    192.168.25.0/24 [90/28416] via 192.168.13.1, 00:00:09, FastEthernet0/0
D    192.168.111.0/24 [90/28416] via 192.168.13.1, 00:00:04, FastEthernet0/0
D    192.168.10.0/24 [90/28416] via 192.168.13.1, 00:00:46, FastEthernet0/0
D    192.168.11.0/24 [90/28416] via 192.168.13.1, 00:00:19, FastEthernet0/0
EDGE#sh ip route ospf
EDGE#
We see five EIGRP routes and nothing for OSPF.
Now let’s try out the command:
EDGE#sh ip ospf 200 rib
OSPF local RIB for Process 200
Codes: * - Best, > - Installed in global RIB

*   192.168.10.0/24, Intra, cost 2, area 0
     via 192.168.13.1, FastEthernet0/0
*   192.168.11.0/24, Intra, cost 2, area 0
     via 192.168.13.1, FastEthernet0/0
*   192.168.15.0/24, Intra, cost 2, area 0
     via 192.168.13.1, FastEthernet0/0
*   192.168.25.0/24, Intra, cost 2, area 0
     via 192.168.13.1, FastEthernet0/0
*   192.168.111.0/24, Intra, cost 2, area 0
     via 192.168.13.1, FastEthernet0/0
And there it is. We see that OSPF is learning the same prefixes as EIGRP. The output is similar to “show ip bgp” in that * = Best, and > = Installed. We could now, theoretically, feel comfortable in taking the next step on our migration path, maybe raising EIGRP’s AD to make OSPF more preferred.
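For reference, one way to take that next step is the "distance" command under the EIGRP process; raising both internal and external EIGRP above OSPF's 110 lets the OSPF routes win. A sketch (the AS number is assumed):

```
EDGE(config)#router eigrp 100
EDGE(config-router)#distance eigrp 171 171
```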
That’s all for today. Another quick post to make up for my hiatus. Post any questions in the comments.
Dropping in to do a quick post today. Sorry for the ridiculous lack of content lately. I've been busy with finding/changing jobs and new responsibilities and all that.
Today I'm going to cover "object groups" on ASAs. I was never a big fan of these, which I realized had a lot to do with inheriting other people's object groups rather than writing my own. It takes a while (for me, at least) to wrap your head around what the person before you was trying to accomplish, and that's what put me off object groups. I've since discovered that when I write them myself, I love them. They can be hugely useful, and they're even available in IOS now (as of 12.4(20)T). Here's an example of when they'd be used:
We need to allow several hosts (192.168.1.100-105) to access a group of servers (192.168.2.10-15) on multiple ports (21, 22, 25, 80, 443). Without object groups, this would produce a pretty lengthy ACL. First I'll do the object group config, then I'll show what it would look like with normal ACL entries.
object-group network OG_Hosts
 description host addresses
 network-object host 192.168.1.100
 network-object host 192.168.1.101
 network-object host 192.168.1.102
 network-object host 192.168.1.103
 network-object host 192.168.1.104
 network-object host 192.168.1.105
!
object-group network OG_Servers
 description server addresses
 network-object host 192.168.2.10
 network-object host 192.168.2.11
 network-object host 192.168.2.12
 network-object host 192.168.2.13
 network-object host 192.168.2.14
 network-object host 192.168.2.15
!
object-group service OG_Hosts-To-Server-Ports
 service-object icmp echo
 service-object icmp echo-reply
 service-object tcp eq 21
 service-object tcp eq 22
 service-object tcp eq 25
 service-object tcp eq 80
 service-object tcp eq 443
Pretty simple. We create some object groups matching IPs for the hosts and servers, then we match ICMP traffic and various TCP ports. Notice that there are two object group types used here, the first is “network”. This type allows us to specify IPs and subnets. The second type is “service”. This type allows us to match different ports and protocols.
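Worth noting: the network type isn't limited to host entries; it can also take whole subnets. A hypothetical group mixing a subnet and a host might look like this:

```
object-group network OG_Branch-Nets
 network-object 10.1.0.0 255.255.0.0
 network-object host 10.2.2.2
```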
Now let’s put it together in an ACL:
access-list ACL_Hosts-To-Servers extended permit object-group OG_Hosts-To-Server-Ports object-group OG_Hosts object-group OG_Servers
Amazingly, we only need one line. We've configured an ACL line with three object groups. Notice that the ports actually come first, which threw me a bit when I first saw object groups in action. Other than that, everything is relatively normal. We need to specify "object-group" before each one, and as usual, it's source, then destination.
Now let’s look at part of the “show access-list” output. This will show us what the firewall sees and matches, and also what we were saved from typing manually:
Firewall# sh access-list ACL_Hosts-To-Servers access-list ACL_Hosts-To-Servers; 252 elements access-list ACL_Hosts-To-Servers line 1 extended permit object-group OG_Hosts-To-Server-Ports object-group OG_Hosts object-group OG_Servers 0xc08e86b0 access-list ACL_Hosts-To-Servers line 1 extended permit icmp host 192.168.1.100 host 192.168.2.10 echo (hitcnt=0) 0xb9c5e5bf access-list ACL_Hosts-To-Servers line 1 extended permit icmp host 192.168.1.101 host 192.168.2.10 echo (hitcnt=0) 0x946345e5 access-list ACL_Hosts-To-Servers line 1 extended permit icmp host 192.168.1.102 host 192.168.2.10 echo (hitcnt=0) 0xc233a45f access-list ACL_Hosts-To-Servers line 1 extended permit icmp host 192.168.1.103 host 192.168.2.10 echo (hitcnt=0) 0x509dadab access-list ACL_Hosts-To-Servers line 1 extended permit icmp host 192.168.1.104 host 192.168.2.10 echo (hitcnt=0) 0xfa1dbbd2 access-list ACL_Hosts-To-Servers line 1 extended permit icmp host 192.168.1.105 host 192.168.2.10 echo (hitcnt=0) 0xedc7eaea access-list ACL_Hosts-To-Servers line 1 extended permit icmp host 192.168.1.100 host 192.168.2.10 echo-reply (hitcnt=0) 0x77938723 access-list ACL_Hosts-To-Servers line 1 extended permit icmp host 192.168.1.101 host 192.168.2.10 echo-reply (hitcnt=0) 0x809068d5 access-list ACL_Hosts-To-Servers line 1 extended permit icmp host 192.168.1.102 host 192.168.2.10 echo-reply (hitcnt=0) 0x1730c200 access-list ACL_Hosts-To-Servers line 1 extended permit icmp host 192.168.1.103 host 192.168.2.10 echo-reply (hitcnt=0) 0xc555b262 access-list ACL_Hosts-To-Servers line 1 extended permit icmp host 192.168.1.104 host 192.168.2.10 echo-reply (hitcnt=0) 0xdd2ca47f access-list ACL_Hosts-To-Servers line 1 extended permit icmp host 192.168.1.105 host 192.168.2.10 echo-reply (hitcnt=0) 0xb5d1bc04
I’m not pasting all 252 lines, that would just be a waste of space. You get the idea though, the firewall is showing us what our single ACE actually does. All those rules come from our one line. That’s the power of object groups.
Just a short one today. Again, sorry for the lack of posts. Hopefully I can get back to posting regularly. I hope this all made sense. If you have any questions, please post in the comments.
about 4 years ago - 9 comments
It’s Jared from CCNPJourney.com. Colby had asked me a couple weeks ago if I would be interested in posting some articles on his blog as he’s been fairly busy lately, and of course I said yes. So I thought for my introductory post on the blog I would do a brief write-up on how to use Iperf!
For starters here’s a little bit of info on Iperf. Iperf is a tool that many system/network admins use to measure the bandwidth on a network, as well as the quality of the path on that network. It is capable of generating traffic using TCP and UDP. The TCP and UDP tests are useful for performing the following kinds of tests:
- Latency (response time or RTT): can be measured with the Ping utility.
- Jitter: can be measured with an Iperf UDP test.
- Datagram loss: can again, be measured with an Iperf UDP test.
- Bandwidth tests are done using the Iperf TCP tests.
Iperf also allows you to run simultaneous tests, and bi-directional. Developers in the community have also created a GUI for Iperf called Jperf, which is a Java based GUI that allows you to save settings, and more easily make changes to your settings. For information on Iperf head on over to the Wiki page, or their page over at SourceForge.
Now lets get down to business…
Lets start out by downloading Iperf. In this example I will be using the Windows version of Iperf, but feel free to use what you choose, it’s all the same switches regardless of the OS). You will need Iperf on a machine you wish to use for the “client” role, and one for the server”.
Once you have put the files on each machine we can begin. Start out by opening a command prompt and then navigating to the folder you have Iperf stored in. You will then enter the commands below, on the server and client respectively to begin your first basic Iperf test!
Enter the command “iperf.exe -s”, without the quotes, to start Iperf in server mode (indicated by “-s”).??
The screenshot above shows what you will be presented with after you’ve started Iperf in server mode. It shows you the port that was automatically chosen (which can be manually changed), as well as the TCP window size, which again is chosen by default based on the OS, but can be changed.
To start Iperf in client mode (using no arguments) enter the command “iperf.exe -c x.x.x.x”, where “x.x.x.x” equals the IP address of your Iperf server in the above step.
In the screenshot above you can see the final results of a basic Iperf test. Again presenting you with the TCP Window size and port that were chosen by default. You will also see the resulting bandwidth calculation as well as the amount of data transferred during the test.
Here’s a look at the output from entering the “iperf.exe -h” command.
Usage: iperf [-s|-c host] [options] iperf [-h|--help] [-v|--version] Client/Server: -f, --format [kmKM] format to report: Kbits, Mbits, KBytes, MBytes -i, --interval # seconds between periodic bandwidth reports -l, --len #[KM] length of buffer to read or write (default 8 KB) -m, --print_mss print TCP maximum segment size (MTU - TCP/IP header) -o, --output <filename> output the report or error message to this specified file -p, --port # server port to listen on/connect to -u, --udp use UDP rather than TCP -w, --window #[KM] TCP window size (socket buffer size) -B, --bind <host> bind to <host>, an interface or multicast address -C, --compatibility for use with older versions does not sent extra msgs -M, --mss # set TCP maximum segment size (MTU - 40 bytes) -N, --nodelay set TCP no delay, disabling Nagle's Algorithm -V, --IPv6Version Set the domain to IPv6 Server specific: -s, --server run in server mode -D, --daemon run the server as a daemon -R, --remove remove service in win32 Client specific: -b, --bandwidth #[KM] for UDP, bandwidth to send at in bits/sec (default 1 Mbit/sec, implies -u) -c, --client <host> run in client mode, connecting to <host> -d, --dualtest Do a bidirectional test simultaneously -n, --num #[KM] number of bytes to transmit (instead of -t) -r, --tradeoff Do a bidirectional test individually -t, --time # time in seconds to transmit for (default 10 secs) -F, --fileinput <name> input the data to be transmitted from a file -I, --stdin input the data to be transmitted from stdin -L, --listenport # port to recieve bidirectional tests back on -P, --parallel # number of parallel client threads to run -T, --ttl # time-to-live, for multicast (default 1) Miscellaneous: -h, --help print this message and quit -v, --version print version information and quit [KM] Indicates options that support a K or M suffix for kilo- or mega- The TCP window size option can be set by the environment variable TCP_WINDOW_SIZE. 
Most other options can be set by an environment variable IPERF_<long option name>, such as IPERF_BANDWIDTH.
That about sums up this post on Iperf. It’s really a very basic program that can help a lot in day to day troubleshooting. I plan to make another post on how to use Jperf as well, so keep an eye out for that!
Hope you enjoyed!