Tuesday 5 March 2013

Understanding Open Systems Interconnection Reference Model (OSI)




The OSI reference model was published in 1984. It serves as an abstract framework, and most operating systems and protocols adhere to it. It is the standard model for networking protocols and distributed applications, formally the International Organization for Standardization's Open Systems Interconnection (ISO/OSI) model.
It defines seven network layers.

Layer 1 – Physical Layer

The physical layer defines the cable or physical medium itself, e.g., thinnet, thicknet, or unshielded twisted pair (UTP). All media are functionally equivalent; the main differences lie in the convenience and cost of installation and maintenance. Converters from one medium to another operate at this level. This layer converts bits into voltages for transmission. A couple of the standard interfaces at this layer are HSSI and X.21.

Layer 2 – Data Link Layer

The data link layer defines the format of data on the network. At this layer the data is translated into the binary format of the underlying LAN or WAN technology for proper transmission. A network data frame, i.e., packet, includes a checksum, source and destination addresses, and the data itself. The largest packet that can be sent through the data link layer defines the Maximum Transmission Unit (MTU). The data link layer handles the physical and logical connections to the packet's destination, using a network interface. A host connected to an Ethernet would have an Ethernet interface to handle connections to the outside world, and a loopback interface to send packets to itself. Ethernet addresses a host using a unique, 48-bit address called its Ethernet address or Media Access Control (MAC) address. This number is unique and is associated with a particular Ethernet device. Hosts with multiple network interfaces use a different MAC address for each. The data link layer's protocol-specific header specifies the MAC addresses of the packet's source and destination.
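As a small illustration of the 48-bit MAC address format described above, the following sketch converts between the familiar colon-separated string and the raw 48-bit integer (the address shown is a made-up example, not tied to any real device):

```python
# Parse and format a 48-bit MAC address (hypothetical example address).
def mac_to_int(mac: str) -> int:
    """Convert a colon-separated MAC string to its 48-bit integer value."""
    return int(mac.replace(":", ""), 16)

def int_to_mac(value: int) -> str:
    """Format a 48-bit integer as a colon-separated MAC string."""
    return ":".join(f"{(value >> shift) & 0xFF:02x}" for shift in range(40, -8, -8))

mac = "00:1a:2b:3c:4d:5e"
n = mac_to_int(mac)
print(int_to_mac(n))  # round-trips back to "00:1a:2b:3c:4d:5e"

# The low-order bit of the first octet distinguishes unicast (0) from multicast (1):
print("multicast" if (n >> 40) & 0x01 else "unicast")
```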

Layer 3 – Network Layer

The main responsibility of the network layer is to insert information into the packet header so that the packet can be properly addressed and routed. Routing protocols build their routing tables at this layer. NFS uses the Internet Protocol (IP) as its network layer interface. IP is responsible for routing, directing datagrams from one network to another. The network layer may have to break datagrams larger than the MTU into smaller packets, and the host receiving the packet will have to reassemble the fragmented datagram. The Internet Protocol identifies each host with a 32-bit IP address. IP addresses are written as four dot-separated decimal numbers between 0 and 255, e.g., 129.79.16.40. The leading one to three bytes of the IP address identify the network, and the remaining bytes identify the host on that network. For large sites, the first two bytes represent the network portion of the address, and the third and fourth bytes identify the subnet and host respectively.
Even though IP packets are addressed using IP addresses, hardware addresses must be used to actually transport data from one host to another. The Address Resolution Protocol (ARP) is used to map an IP address to its hardware address.
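The byte-level split between network, subnet, and host described above can be seen directly with Python's `ipaddress` module, using the article's example address and assuming a two-byte (/16) network portion:

```python
import ipaddress

# Example address from the text; assume a /16 split as for a "large site".
addr = ipaddress.ip_address("129.79.16.40")
net = ipaddress.ip_network("129.79.0.0/16")

print(int(addr))            # the 32-bit integer behind the dotted notation
print(addr in net)          # True: the host belongs to this network
packed = addr.packed        # 4 bytes in network (big-endian) order
# With a /16 mask, byte 3 is the subnet and byte 4 is the host:
print(packed[2], packed[3])  # 16 40
```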

Layer 4 – Transport Layer

The transport layer provides end-to-end transport services and establishes the logical connection between two computers. The transport layer subdivides the user buffer into network-buffer-sized datagrams and enforces the desired transmission control. Two transport protocols, the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP), sit at the transport layer. Reliability and speed are the primary differences between these two protocols. TCP establishes connections between two hosts on the network through 'sockets', which are identified by an IP address and port number. TCP keeps track of the packet delivery order and the packets that must be resent. Maintaining this information for each connection makes TCP a stateful protocol. UDP, on the other hand, provides a low-overhead transmission service, but with less error checking. NFS is built on top of UDP because of its speed and statelessness. Statelessness simplifies crash recovery.
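The TCP/UDP contrast above can be sketched with Python's `socket` module: a UDP datagram is simply fired at an address with no handshake, while TCP requires a connection before any data flows. This is a minimal loopback demonstration, with the OS choosing a free port:

```python
import socket

# UDP: connectionless "fire and forget" over the loopback interface.
udp_server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_server.bind(("127.0.0.1", 0))          # port 0 = let the OS pick
port = udp_server.getsockname()[1]

udp_client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_client.sendto(b"hello", ("127.0.0.1", port))  # no handshake needed
data, peer = udp_server.recvfrom(1024)
print(data)  # b'hello'

# TCP, by contrast, must connect() before sending, and the kernel then
# tracks delivery order and retransmissions for the life of the connection.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print(tcp.type == socket.SOCK_STREAM)  # True

for s in (udp_server, udp_client, tcp):
    s.close()
```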

Layer 5 – Session Layer

The session layer is responsible for establishing a session between two applications. The connection is maintained during data transfer and released once the transfer is done. The session protocol defines the format of the data sent over the connection. NFS uses the Remote Procedure Call (RPC) protocol as its session protocol. RPC may be built on either TCP or UDP. Login sessions use TCP, whereas NFS and broadcast services use UDP. The session layer works much like telephone circuitry: the three phases are connection establishment, data transfer, and connection release.
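The three session phases can be seen in a remote procedure call. As a sketch only, using Python's standard-library XML-RPC rather than the Sun RPC that NFS actually uses, the client establishes a connection, invokes a remote function as if it were local, and the connection is released when the exchange completes:

```python
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

# Connection establishment: a tiny RPC server on an OS-chosen loopback port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Data transfer: the client calls the remote procedure as if it were local.
proxy = xmlrpc.client.ServerProxy(f"http://127.0.0.1:{port}/")
result = proxy.add(2, 3)
print(result)  # 5

# Connection release.
server.shutdown()
```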

Layer 6 – Presentation Layer

The presentation layer receives information from the application layer protocols and translates it into a format all computers can understand. The presentation layer is not concerned with the meaning of the data. This layer also handles issues related to data compression and encryption. External Data Representation (XDR) sits at the presentation layer. It converts the local representation of data to its canonical form and vice versa. The canonical form uses a standard byte ordering and structure-packing convention, independent of the host.
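The canonical byte ordering XDR relies on is big-endian "network order". Python's `struct` module can illustrate the idea: packing with the `>` prefix yields the same bytes on every machine, whereas native packing depends on the host CPU:

```python
import struct

# XDR encodes data in big-endian 4-byte units; struct's '>' prefix produces
# that canonical ordering regardless of the host's native byte order.
value = 0x01020304
canonical = struct.pack(">I", value)   # host-independent representation
native = struct.pack("=I", value)      # whatever the local CPU uses

print(canonical.hex())                           # '01020304' on every machine
print(struct.unpack(">I", canonical)[0] == value)  # round-trips cleanly
```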

Layer 7 – Application Layer

The application layer works closest to the user and provides network services to end users. This layer does not include the actual applications but the protocols that support them. Mail, FTP, Telnet, DNS, NIS, and NFS are examples of application layer protocols.

9 top threats to cloud computing security




Data breaches and cloud service abuse rank among the greatest cloud security threats, according to the Cloud Security Alliance

Cloud computing has grabbed the spotlight at this year's RSA Conference 2013 in San Francisco, with vendors aplenty hawking products and services that equip IT with controls to bring order to cloud chaos. But the first step is for organizations to identify precisely where the greatest cloud-related threats lie.
To that end, the CSA (Cloud Security Alliance) has identified "The Notorious Nine," the top nine cloud computing threats for 2013. The report reflects the current consensus among industry experts surveyed by CSA, focusing on threats specifically related to the shared, on-demand nature of cloud computing.
First on the list is data breaches. To illustrate the potential magnitude of this threat, CSA pointed to a research paper from last November describing how a virtual machine could use side-channel timing information to extract private cryptographic keys in use by other VMs on the same server. A malicious hacker wouldn't necessarily need to go to such lengths to pull off that sort of feat, though. If a multitenant cloud service database isn't designed properly, a single flaw in one client's application could allow an attacker to get at not just that client's data, but every other client's data as well.
The challenge in addressing these threats of data loss and data leakage is that "the measures you put in place to mitigate one can exacerbate the other," according to the report. You could encrypt your data to reduce the impact of a breach, but if you lose your encryption key, you'll lose your data. Conversely, if you opt to keep offline backups of your data to reduce data loss, you increase your exposure to data breaches.
The second-greatest threat in a cloud computing environment, according to CSA, is data loss: the prospect of seeing your valuable data disappear into the ether without a trace. A malicious hacker might delete a target's data out of spite -- but then, you could lose your data to a careless cloud service provider or a disaster, such as a fire, flood, or earthquake. Compounding the challenge, encrypting your data to ward off theft can backfire if you lose your encryption key.
Data loss isn't only problematic in terms of impacting relationships with customers, the report notes. You could also get into hot water with the feds if you're legally required to store particular data to remain in compliance with certain laws, such as HIPAA.
The third-greatest cloud computing security risk is account or service traffic hijacking. Cloud computing adds a new threat to this landscape, according to CSA. If an attacker gains access to your credentials, he or she can eavesdrop on your activities and transactions, manipulate data, return falsified information, and redirect your clients to illegitimate sites. "Your account or services instances may become a new base for the attacker. From here, they may leverage the power of your reputation to launch subsequent attacks," according to the report. As an example, CSA pointed to an XSS attack on Amazon in 2010 that let attackers hijack credentials to the site.
The key to defending against this threat is to protect credentials from being stolen. "Organizations should look to prohibit the sharing of account credentials between users and services, and they should leverage strong two-factor authentication techniques where possible," according to CSA.
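The "strong two-factor authentication techniques" CSA recommends are commonly implemented with time-based one-time passwords (TOTP, RFC 6238), the mechanism behind most authenticator apps. This is a minimal standard-library sketch, not a production implementation; the shared secret shown is the RFC's published test key, not a real credential:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestep=30, digits=6, now=None):
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if now is None else now) // timestep)
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test key ("12345678901234567890" in base32) at its published test time:
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59))  # '287082' per the RFC vector
```

A server verifying such codes would compare the client's submission against the codes for the current time step and its immediate neighbors, to tolerate clock skew.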