Whispers & Screams
And Other Things

Classful IP Addressing (IPv4)

IP addressing is among the most important topics in any examination of TCP/IP. The IP address is a 32-bit binary identifier which, when configured correctly, enables each machine on an IP network to be uniquely identified. It is used to allow communication with any specific device on the network.

An IP address is defined in software and is configured dynamically as needed by software whether controlled by a human or a software process (as opposed to a MAC address which is a permanent, hard coded hardware address which cannot be easily changed). IP addressing was designed to allow media independent communication between any two hosts on the same, or different, IP networks.

Terminology

As a precursor to looking at IP Addressing in some detail, let's define some basic terminology.

Byte - A byte is a unit of binary information that most commonly consists of eight bits. In the course of this post, the term Octet will also be used to represent one and the same thing.

IP Address - An IP address is a 32-bit binary number which represents, when assigned to a network device, its unique Network Layer (Layer 3) address. IP addresses are commonly written in Dotted Decimal notation for ease of human readability. Dotted Decimal notation is the conventional way of describing an IP address (e.g. 192.168.1.1) and is formed by separating the 32-bit IP address into 4 x 8-bit Octets, converting each Octet into a decimal number between 0 and 255, and separating each of these Octets with a dot. An IP Address is also frequently referred to as a Network Address and the terms can be used interchangeably, although IP Address is by far the more common.
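
To make the conversion concrete, here is a minimal Python sketch of the Octet splitting described above; the function names are ours, chosen purely for illustration:

```python
# Convert between a 32-bit integer address and dotted decimal notation.
def to_dotted_decimal(addr):
    """Split the 32-bit address into four 8-bit Octets, high byte first."""
    return ".".join(str((addr >> shift) & 0xFF) for shift in (24, 16, 8, 0))

def from_dotted_decimal(dotted):
    """Reassemble four decimal Octets into a single 32-bit integer."""
    result = 0
    for octet in dotted.split("."):
        result = (result << 8) | int(octet)
    return result

print(to_dotted_decimal(0xC0A80101))  # 192.168.1.1
```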

Broadcast Address - On any IP Network, the Broadcast Address is the address used to send to all hosts which are members of and connected to the IP Network.

IP Addressing

As mentioned previously, an IP address is made up of 32 binary bits. It is extremely important to always bear this fact in mind when working with IP addresses, as failing to do so can significantly impair one's ability to fully understand and manipulate the IP addressing system as required.
IP addresses are commonly described in one of three ways -

    1. Dotted Decimal (As described above)

    2. Binary (As a 32 bit binary number)

    3. Hexadecimal (Rarely used but can be seen when addresses are stored within programs or during packet analysis)
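
To show all three notations side by side, here is a short Python sketch (the example address is arbitrary):

```python
addr = 0xC0A80101  # the address 192.168.1.1 held as one 32-bit number

dotted = ".".join(str((addr >> s) & 0xFF) for s in (24, 16, 8, 0))
binary = format(addr, "032b")  # zero-padded to the full 32 bits
hexstr = format(addr, "08X")

print(dotted)  # 192.168.1.1
print(binary)  # 11000000101010000000000100000001
print(hexstr)  # C0A80101
```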



One important aspect of an IP address is that it is a hierarchical address. This has a number of advantages, not least of which is the fact that it enables addresses to be aggregated together, which greatly simplifies the mechanisms used to route traffic around the Internet.
In IPv4 there are 4.3 billion IP addresses available in theory and without this mechanism for route aggregation it would be necessary for Internet routers to know the location of each one of these connected devices.
The hierarchical system used by IPv4 is one which separates the IP address into two components, namely a network part and a host part.
In practice this "two component" system is further split down, as the host part is frequently subdivided into even smaller subnetworks. In this post, however, we will limit our discussion to the "two component" system.

The term "subnetwork" (often abbreviated to subnet) is used so frequently within the network engineering community that it has become part of the jargon of the trade. This has given it a reputation for hidden complexity, but it is actually extremely simple. A subnetwork (subnet) is any subdivision of a larger network. It really is as simple as that.
The Two Component System / Network part and Host part

In order to make IP addresses hierarchical, a two component system has been created. This system splits the IP address into two parts known as The Network Part and The Host Part. This can be likened to a telephone number where (typically) the first 4 or 5 digits represent the town or city and the subsequent 6 or 7 digits represent the individual line.
The designers of this hierarchical addressing scheme created 5 classes of IP address by splitting up the full range of 4.3 billion addresses in a logical way. These 5 classes (or subdivisions) are known as Class A, B, C, D, and E networks.
For the purposes of this post we shall concern ourselves primarily with classes A, B and C however I shall briefly introduce each of the classes in the following section.
The 5 Network Classes
The image below depicts the 5 classes of IP Network as well as some of the basic features associated with each.


Class A - Class A networks were designed for use in networks which needed to accommodate a very large number of hosts.
As can be seen from the diagram, the first bit in a Class A address is always 0.
In each of network classes A, B and C, we can also see that the addresses are split into two parts, namely Network and Hosts.
These parts can be likened to the two parts of the telephone number described earlier.
The Network part is like the city code and the Host part is like the rest of the telephone number.
As you can see from the image, the division between the Network and Host part is set after the 8th bit. This means that we have 7 bits available to represent different Networks and 24 bits available to represent the individual hosts within each of the Class A networks.
It is clear therefore that, since the first bit must always be 0, the lowest network address available is 00000000.X.X.X (0 in decimal) and the highest network address available is 01111111.X.X.X (127 in decimal).
It would seem therefore that the range of addresses available to Class A networks is 0.X.X.X up to 127.X.X.X (Where X represents the Host part) but I shall demonstrate later that the 0 and 127 networks are reserved therefore the Class A address range runs from 1.X.X.X to 126.X.X.X in practice.

Class B - In Class B networks, the split between the network part and the host part happens after the 16th bit.
In any Class B network address the first two bits must always be set to 10. This leaves 14 bits to define the network number and allows addresses to range from 10000000.00000000.X.X up to 10111111.11111111.X.X .
These binary addresses equate in decimal to the first two Octets of Class B addresses ranging from 128.0.X.X up to 191.255.X.X
Class C - The pattern now emerging is that Class C addresses use the first 3 Octets to define the Network part of their addresses. Again, as with Class A and B networks some bits are permanently defined and in the case of Class C network addresses, the first 3 bits are always set to 110.
This means that we have 21 bits available to define the network part of Class C network addresses ranging (in binary) from 11000000.00000000.00000000.X up to 11011111.11111111.11111111.X which in decimal equates to 192.0.0.X up to 223.255.255.X

Class D - Class D (224-239) is reserved for Multicast Addressing, which is beyond the scope of this post. If required, please click this link for more detail: Class D Networks and IP Multicasting.

Class E - Class E (240-255) is reserved for scientific experimentation and research and if any subsequent posts on this blog examine Class E networks, they will be linked to from here.
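
The class rules described above amount to a simple test on the first Octet. A minimal Python sketch (the function name is ours):

```python
def ip_class(first_octet):
    """Classify an IPv4 address by the leading bits of its first octet."""
    if first_octet < 128:    # leading bit 0
        return "A"
    if first_octet < 192:    # leading bits 10
        return "B"
    if first_octet < 224:    # leading bits 110
        return "C"
    if first_octet < 240:    # leading bits 1110 (multicast)
        return "D"
    return "E"               # leading bits 1111 (experimental)

for ip in ("10.0.0.1", "172.16.5.4", "192.168.1.1", "224.0.0.5", "250.1.1.1"):
    print(ip, "is Class", ip_class(int(ip.split(".")[0])))
```

Note that this classifies 0.X.X.X and 127.X.X.X as Class A by bit pattern even though, as described above, both of those networks are reserved in practice.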


Lightweight Directory Access Protocol (LDAP)

Traditional network engineers who arrive at the networking industry via the world of telecommunications can often find themselves unfamiliar with certain facets of the industry. Such facets can include network security and servers. A protocol which lies at the intersection between network security and server technology is LDAP, which stands for Lightweight Directory Access Protocol.

So what is LDAP and what is it used for? Let's take a look at the protocol in some detail.

Within the OSI model, LDAP sits at Layer 7 and is, as such, an application layer protocol. LDAP is also an "open" protocol, which means that its standards are public information and it is not associated with or owned by any individual commercial organisation. Its primary purpose is to provide a protocol for accessing and maintaining distributed directory information services over an IP network, and it was specified to work seamlessly as part of a TCP/IP modelled network.

The most common usage for LDAP is to provide a mechanism for "single sign on" across a distributed, multi-facility IT estate, in order to minimise repeated authentication across multiple services. LDAP is based on a subset of the older and more heavily specified X.500 protocol, which was designed to be compatible with the more abstract OSI model.

When people talk about “LDAP”, they are really talking about the complex combination of business rules, software and data that allow you to log in and get access to secure resources.

A client starts an LDAP session by connecting to an LDAP server, called a Directory System Agent (DSA), by default on TCP and UDP port 389, or on port 636 for LDAPS. The Global Catalog is available by default on port 3268, or 3269 for LDAPS. The client then sends an operation request to the server, and the server sends responses in return. With some exceptions, the client does not need to wait for a response before sending the next request, and the server may send the responses in any order. All information is transmitted using Basic Encoding Rules (BER). This type of encoding is commonly called type-length-value or TLV encoding. The LDAP server hosts something called the directory-server database. As such, the LDAP protocol can be thought of loosely as a network enabled database query language.
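
The TLV idea mentioned above is easiest to grasp with a toy example. The following Python sketch implements only the short-form TLV case (values up to 127 bytes) and is purely illustrative; real BER also defines long-form lengths and constructed types:

```python
def tlv_encode(tag, value):
    """Encode one field as tag byte + length byte + value bytes (short form only)."""
    if len(value) > 127:
        raise ValueError("this sketch handles short-form lengths only")
    return bytes([tag, len(value)]) + value

def tlv_decode(data):
    """Decode a run of short-form TLV fields into (tag, value) pairs."""
    fields, i = [], 0
    while i < len(data):
        tag, length = data[i], data[i + 1]
        fields.append((tag, data[i + 2:i + 2 + length]))
        i += 2 + length
    return fields

# 0x04 is the BER tag for an OCTET STRING.
encoded = tlv_encode(0x04, b"cn=admin") + tlv_encode(0x04, b"secret")
print(tlv_decode(encoded))  # [(4, b'cn=admin'), (4, b'secret')]
```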

The client may request the following operations:

StartTLS — use the LDAPv3 Transport Layer Security (TLS) extension for a secure connection
Bind — authenticate and specify LDAP protocol version
Search — search for and/or retrieve directory entries
Compare — test if a named entry contains a given attribute value
Add a new entry
Delete an entry
Modify an entry
Modify Distinguished Name (DN) — move or rename an entry
Abandon — abort a previous request
Extended Operation — generic operation used to define other operations
Unbind — close the connection (not the inverse of Bind)

As was alluded to above, the directory-server database is indeed a database and, as a database, is structured in accordance with the rules of its own schema. The contents of the entries in an LDAP domain are governed by a directory schema, a set of definitions and constraints concerning the structure of the directory information tree (DIT).

The schema of a Directory Server defines a set of rules that govern the kinds of information that the server can hold. It has a number of elements, including:

Attribute Syntaxes—Provide information about the kind of information that can be stored in an attribute.
Matching Rules—Provide information about how to make comparisons against attribute values.
Matching Rule Uses—Indicate which attribute types may be used in conjunction with a particular matching rule.
Attribute Types—Define an object identifier (OID) and a set of names that may be used to refer to a given attribute, and associates that attribute with a syntax and set of matching rules.
Object Classes—Define named collections of attributes and classify them into sets of required and optional attributes.
Name Forms—Define rules for the set of attributes that should be included in the RDN (Relative Distinguished Name) for an entry.
Content Rules—Define additional constraints about the object classes and attributes that may be used in conjunction with an entry.
Structure Rules—Define rules that govern the kinds of subordinate entries that a given entry may have.
Attributes are the elements responsible for storing information in a directory, and the schema defines the rules for which attributes may be used in an entry, the kinds of values that those attributes may have, and how clients may interact with those values.

Clients may learn about the schema elements that the server supports by retrieving an appropriate subschema subentry.

The schema defines object classes. Each entry must have an objectClass attribute, containing named classes defined in the schema. The schema definition of the classes of an entry defines what kind of object the entry may represent - e.g. a person, organization or domain. The object class definitions also define the list of attributes that must contain values and the list of attributes which may contain values.

For example, an entry representing a person might belong to the classes "top" and "person". Membership in the "person" class would require the entry to contain the "sn" and "cn" attributes, and allow the entry also to contain "userPassword", "telephoneNumber", and other attributes. Since entries may have multiple ObjectClasses values, each entry has a complex of optional and mandatory attribute sets formed from the union of the object classes it represents. ObjectClasses can be inherited, and a single entry can have multiple ObjectClasses values that define the available and required attributes of the entry itself. A parallel to the schema of an objectClass is a class definition and an instance in Object-oriented programming, representing LDAP objectClass and LDAP entry, respectively.
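
The union of attribute sets described above can be sketched in Python. The tiny schema below mirrors only the "top"/"person" example in the text and is not a complete standard schema:

```python
# Hypothetical, cut-down schema for illustration only.
schema = {
    "top":    {"must": {"objectClass"}, "may": set()},
    "person": {"must": {"sn", "cn"},
               "may": {"userPassword", "telephoneNumber"}},
}

def effective_attributes(object_classes):
    """Union the required and optional attribute sets of an entry's classes."""
    must, may = set(), set()
    for oc in object_classes:
        must |= schema[oc]["must"]
        may |= schema[oc]["may"]
    return must, may

must, may = effective_attributes(["top", "person"])
print(sorted(must))  # ['cn', 'objectClass', 'sn']
print(sorted(may))   # ['telephoneNumber', 'userPassword']
```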

Directory servers may publish the directory schema controlling an entry at a base DN given by the entry's subschemaSubentry operational attribute. (An operational attribute describes operation of the directory rather than user information and is only returned from a search when it is explicitly requested.)

Server administrators can add additional schema entries in addition to the provided schema elements. A schema for representing individual people within organizations is termed a white pages schema.

We will go on in subsequent posts to examine some of the concepts described here in more detail.

An Introduction to Layer 4 Handling of Real Time Traffic on Satellite Networks

Satellite telecommunications is, by its very nature, prone to long propagation delays and high error rates, both of which can impair the performance of the TCP protocol, and most specifically the use of TCP to transport real time applications. At Apogee Internet, satellite broadband is a core component of the services delivered, so it is important to understand these effects and how they impact the efficiency of the TCP exchange and the consequent streaming video delivery.

In this regard, we have examined the field using a framework of techniques which can serve to maximise the usability of these channels and in some cases to simply ensure they are usable in the first place. There are various implementations of TCP that can be used which enhance protocol performance by means of adjusting the role of acknowledgements or delaying them.

Most existing solutions do not live up to the requirements of today’s real time applications which at best results in inefficient utilisation of bandwidth and in extreme cases can affect the transponder in use quite dramatically.

Satellite systems have evolved through the delivery of television services to the point where, nowadays, they have an integrated part to play in any national broadband IP delivery strategy. With their ubiquitous reach and ability to broadcast, today's core communications satellites enable the delivery of time sensitive information over very large geographical areas. These systems do, however, have their drawbacks, such as bandwidth asymmetry. Also, due to the inherent propagation delays involved in transmission across such vast distances, these networks always have a high Bandwidth Delay Product (BDP) and can certainly be described as Long Fat Networks (LFNs), sometimes known as "elephant" networks.
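
To put a number on the Bandwidth Delay Product, here is a rough Python sketch; the link rate is an assumption chosen for illustration, and the ~500 ms round trip is the commonly quoted figure for a geostationary hop:

```python
link_rate_bps = 10_000_000  # assumed 10 Mbit/s forward channel
rtt_seconds = 0.5           # typical GEO round-trip time (~250 ms each way)

# BDP = how many bits are "in flight" when the pipe is kept full.
bdp_bits = link_rate_bps * rtt_seconds
print(f"BDP: {bdp_bits:.0f} bits ({bdp_bits / 8:.0f} bytes)")

# A classic 64 KB TCP window can keep only a fraction of this pipe full:
print(f"64 KB window utilisation: {65535 * 8 / bdp_bits:.1%}")
```

With these assumed figures the BDP is 625,000 bytes, roughly ten times the classic 64 KB TCP window, which is exactly why unmodified TCP struggles to fill such a link.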

These long transmission distances also result in low power channels, which in turn bring about Bit Error Rates that are always higher than those of terrestrial networks.

The mainstream layer 4 protocols in use today are not well placed to make efficient use of these conditions. TCP, for example, built on the principles of Slow Start, Congestion Management and Additive Increase Multiplicative Decrease, was designed for far less error prone networks such as hard wired networks, and is manifestly unsuitable for use in heterogeneous network environments such as satellite links.

TCP has three major shortfalls in these circumstances:

    1. Ineffective Bandwidth Utilisation

    2. Chatty Congestion Prevention Mechanisms

    3. Wasteful Windowing

In future posts, we shall go on to examine the implications of these shortcomings in the layer 4 mechanisms, as well as ways to mitigate their undesirable effects.


The EIGRP (Enhanced Interior Gateway Routing Protocol) metric

EIGRP (Enhanced Interior Gateway Routing Protocol) is a network protocol that lets routers exchange information more efficiently than was the case with older routing protocols. EIGRP, a Cisco proprietary protocol, evolved from IGRP (Interior Gateway Routing Protocol), and routers using either EIGRP or IGRP can interoperate because the metric (criteria used for selecting a route) used with one protocol can be translated into the metric of the other. It is this metric which we will examine in more detail.

Using EIGRP, a router keeps a copy of its neighbour’s routing tables. If it can’t find a route to a destination in one of these tables, it queries its neighbours for a route and they in turn query their neighbours until a route is found. When a routing table entry changes in one of the routers, it notifies its neighbours of the change. To keep all routers aware of the state of neighbours, each router sends out a periodic “hello” packet. A router from which no “hello” packet has been received in a certain period of time is assumed to be inoperative.

EIGRP uses the Diffusing-Update Algorithm (DUAL) to determine the most efficient (least cost) route to a destination. A DUAL finite state machine contains decision information used by the algorithm to determine the least-cost route (which considers distance and whether a destination path is loop-free).

Figure 1




The Diffusing Update Algorithm (DUAL) is a modification of the way distance-vector routing typically works that allows the router to identify loop free failover paths. This concept is easier to grasp if you imagine it geographically. Consider the map of the UK shown in Figure 1. The numbers show approximate travel distance, in miles. Imagine that you live in Glasgow. From Glasgow, you need to determine the best path to Hull. Imagine that each of Glasgow's neighbours advertises a path to Hull. Each neighbour advertises its cost (travel distance) to get to Hull. The cost from the neighbour to the destination is called the advertised distance. The cost from Glasgow itself is called the feasible distance.
In this example, Newcastle reports that if Glasgow routed to Hull through Newcastle, the total cost (feasible distance) is 302 miles, and that the remaining cost once the traffic gets to Newcastle is only 141 miles. Table 1 shows distances reported from Glasgow to Hull going through each of Glasgow's neighbours.

Table 1




Glasgow will select the route with the lowest feasible distance which is the path through Newcastle.

If the Glasgow-Newcastle road were to be closed, Glasgow knows it may fail over to Carlisle without creating a loop. Notice that the distance from Carlisle to Hull (211 miles) is less than the distance from Glasgow to Hull (302 miles). Because Carlisle is closer to Hull than Glasgow is, routing through Carlisle cannot involve driving back through Glasgow first (as it would for Ayr). Carlisle is a guaranteed loop free path.

The idea that a path through a neighbour is loop free if the neighbour is closer is called the feasibility requirement and can be restated as "using a path where the neighbour's advertised distance is less than our feasible distance will not result in a loop."

The neighbour with the best path is referred to as the successor. Neighbours that meet the feasibility requirement are called feasible successors. In emergencies, EIGRP understands that using feasible successors will not cause a routing loop and instantly switches to the backup paths.

Notice that Ayr is not a feasible successor. Ayr's AD (337) is higher than Newcastle's FD (302). For all we know, driving to Hull through Ayr involves driving from Glasgow to Ayr, then turning around and driving back to Glasgow before continuing on to Hull (in fact, it does). Ayr will still be queried if the best path is lost and no feasible successors are available, because potentially there could be a path that way; however, paths that do not meet the feasibility requirement will not be inserted into the routing table without careful consideration.
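
The successor and feasible successor selection can be sketched in Python using the distances from this example. The AD figures are from the text, while the Carlisle and Ayr feasible distances are assumed values added for illustration:

```python
# advertised distance (AD) and feasible distance (FD) via each neighbour
routes = {
    "Newcastle": {"ad": 141, "fd": 302},
    "Carlisle":  {"ad": 211, "fd": 372},  # FD assumed for illustration
    "Ayr":       {"ad": 337, "fd": 410},  # FD assumed for illustration
}

# The successor is the neighbour offering the lowest feasible distance.
successor = min(routes, key=lambda n: routes[n]["fd"])
best_fd = routes[successor]["fd"]

# Feasibility requirement: a neighbour whose AD is less than our best FD
# is a guaranteed loop-free backup (a feasible successor).
feasible_successors = [n for n in routes
                       if n != successor and routes[n]["ad"] < best_fd]

print("Successor:", successor)                      # Newcastle
print("Feasible successors:", feasible_successors)  # ['Carlisle']
```

Ayr fails the check because its AD (337) exceeds the best FD (302), exactly as the text describes.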

EIGRP uses a sophisticated metric that considers bandwidth, load, reliability and delay. That metric is:




[latex]256 \times \left(K_1 \times bandwidth + \dfrac{K_2 \times bandwidth}{256 - load} + K_3 \times delay\right) \times \dfrac{K_5}{reliability + K_4}[/latex]


Although this equation looks intimidating, a little work will help you understand the maths and the impact the metric has on route selection.

You first need to understand that EIGRP selects path based on the fastest path. To do that it uses K-values to balance bandwidth and delay. The K-values are constants that are used to adjust the relative contribution of the various parameters to the total metric. In other words, if you wanted delay to be much more relatively important than bandwidth, you might set K3 to a much larger number.

You next need to understand the variables:

    • Bandwidth—Bandwidth is defined as 10,000,000 divided by the bandwidth of the slowest link in the path, in kbps. Because routing protocols select the lowest metric, inverting the bandwidth (using it as the divisor) makes faster paths have lower costs.

 

    • Load and reliability—Load and reliability are 8-bit calculated values based on the performance of the link. Both are multiplied by a zero K-value, so neither is used.

 

    • Delay—Delay is a constant value for each interface type, and is stored in terms of microseconds. For example, serial links have a delay of 20,000 microseconds and Ethernet links have a delay of 1000 microseconds. EIGRP uses the sum of all delays along the path, in tens of microseconds.



By default, K1=K3=1 and K2=K4=K5=0. Those who followed the maths will note that when K5=0 the metric is always zero. Because this is not useful, EIGRP simply ignores everything outside the parentheses when K5=0. Therefore, given the default K-values, the equation becomes:




[latex]256 \times \left(1 \times bandwidth + \dfrac{0 \times bandwidth}{256 - load} + 1 \times delay\right) \times \dfrac{0}{reliability + 0}[/latex]


Substituting the earlier description of variables, the equation becomes 10,000,000 divided by the chokepoint bandwidth, plus the sum of the delays:




[latex]256 \times \left(\dfrac{10^7}{\min(bandwidth)} + \sum \dfrac{delay}{10}\right)[/latex]
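
The default-K formula reduces to straightforward arithmetic. A small Python sketch (the helper name is ours, chosen for illustration):

```python
def eigrp_metric(bandwidths_kbps, delays_us):
    """Default-K EIGRP composite metric for a path.

    bandwidths_kbps: interface bandwidths along the path, in kbit/s
    delays_us: interface delays along the path, in microseconds
    """
    bw_term = 10**7 // min(bandwidths_kbps)  # the slowest link dominates
    delay_term = sum(delays_us) // 10        # delays are summed in tens of us
    return 256 * (bw_term + delay_term)

# Two Fast Ethernet hops (100,000 kbps, 100 us delay each):
print(eigrp_metric([100_000, 100_000], [100, 100]))  # 30720

# A single T1 serial hop (1544 kbps, 20,000 us delay):
print(eigrp_metric([1544], [20_000]))  # 2169856
```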


As a final note, it is important to remember that routers running EIGRP will not become neighbours unless they share K-values. That said, you really should not change the K-values from their defaults without a compelling reason.


Rapid Spanning Tree Protocol

The IEEE 802.1D Spanning Tree Protocol was designed to keep a switched or bridged network loop free, with adjustments made to the network topology dynamically. A topology change typically takes 30 seconds, with a port moving from the Blocking state to the Forwarding state after two intervals of the Forward Delay Timer. As technology has improved, 30 seconds has become an unbearable length of time to wait for a production network to fail over or "heal" itself during a problem.

The IEEE 802.1w standard was developed to take 802.1D's principal concepts and make the resulting convergence much faster. This is also known as the Rapid Spanning Tree Protocol (RSTP), which defines how switches must interact with each other to keep the network topology loop free in a very efficient manner.

As with 802.1D, RSTP's basic functionality can be applied as a single instance or multiple instances. This can be done by using RSTP as the underlying mechanism for the Cisco-proprietary Per-VLAN Spanning Tree Protocol (PVST+). The resulting combination is called Rapid PVST+ (RPVST+). RSTP is also used as part of the IEEE 802.1s Multiple Spanning Tree (MST) operation. RSTP operates consistently in each, but replicating RSTP as multiple instances requires different approaches.

RSTP Port Behaviour
In 802.1D, each switch port is assigned a role and a state at any given time. Depending on the port's proximity to the Root Bridge, it takes on one of the following roles:

    • Root Port

 

    • Designated Port

 

    • Blocking Port (neither root nor designated)



The Cisco-proprietary UplinkFast feature also reserved a hidden alternate port role for ports that offered parallel paths to the root but were in the Blocking state.

Each switch port is also assigned one of five possible states:

    • Disabled

 

    • Blocking

 

    • Listening

 

    • Learning

 

    • Forwarding



Only the Forwarding state allows data to be sent and received. A port's state is somewhat tied to its role. For example, a blocking port cannot be a root port or a designated port.

RSTP achieves its rapid nature by letting each switch interact with its neighbours through each port. This interaction is performed based on a port's role, not strictly on the BPDUs that are relayed from the Root Bridge. After the role is determined, each port can be given a state that determines what it does with incoming data.

The Root Bridge in a network using RSTP is elected just as with 802.1D: by the lowest Bridge ID. After all switches agree on the identity of the root, the following port roles are determined.

    • Root Port - The one switch port on each switch that has the best root path cost to the root. This is identical to 802.1D. (By definition the root bridge has no root ports.)

 

    • Designated Port - The switch port on a network segment that has the best root path cost to the root.

 

    • Alternate Port - A port that has an alternative path to the root, different than the path the root port takes. This path is less desirable than that of the root port. (An example of this is an access-layer switch with two uplink ports; one becomes the root port, and the other is an alternate port.)

 

    • Backup port - A port that provides a redundant (but less desirable) connection to a segment where another switch port already connects. If that common segment is lost, the switch might or might not have a path back to the root.



RSTP defines port states only according to what the port does with incoming frames. (Naturally, if incoming frames are ignored or dropped, so are outgoing frames.) Any port role can have any of these port states:

    • Discarding - Incoming frames are simply dropped; no MAC addresses are learned. (This state combines the 802.1D Disabled, Blocking and Listening states because all three did not effectively forward anything. The Listening state is not needed because RSTP can quickly negotiate a state change without listening for BPDUs first.)

 

    • Learning - Incoming frames are dropped but MAC addresses are learned.

 

    • Forwarding - Incoming frames are forwarded according to MAC addresses that have been (and are being) learned.
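
The collapse of the five 802.1D states into these three RSTP states can be summarised as a simple lookup. A Python sketch:

```python
# 802.1D port state -> equivalent RSTP port state
STATE_MAP = {
    "Disabled":   "Discarding",
    "Blocking":   "Discarding",
    "Listening":  "Discarding",
    "Learning":   "Learning",
    "Forwarding": "Forwarding",
}

for legacy, rapid in STATE_MAP.items():
    print(f"{legacy:>10} -> {rapid}")
```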

 
