Explanation and operation of web cache:

A proxy server is a server that acts as an intermediary between clients and other servers. A client connects to the proxy server to request some service, such as a web page or another resource, held on a different server. The proxy may answer the request from its own stored copy of the server's response, or forward the request and relay the response back to the client. A proxy server has many purposes, including the following (a small caching sketch follows the list):

  • To keep the machines behind it anonymous.
  • To speed up access to resources; web proxies are commonly used to cache web pages.
  • To apply access policies to network services or content.
  • To scan content for malware before delivery.
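
As a rough illustration of the caching role (a sketch, not a production proxy), the following Python fragment implements a tiny forward proxy that serves repeated GET requests from an in-memory cache. It assumes Python 3's standard library only and ignores expiry and cache-control headers, which a real proxy would have to honour:

    # Minimal caching forward proxy sketch: GET only, in-memory cache,
    # no expiry. Illustrates the caching idea, not a production design.
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    cache = {}  # maps requested URL -> response body

    class CachingProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            url = self.path  # for a forward proxy, the full URL arrives here
            if url not in cache:
                # Cache miss: fetch from the origin server and keep the body.
                with urllib.request.urlopen(url) as resp:
                    cache[url] = resp.read()
            body = cache[url]  # cache hit, or the body just stored
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), CachingProxy).serve_forever()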

Proxy servers offer several levels of anonymity, and which level matters depends on the purpose the proxy is being used for. They may be divided into the following types:

Transparent:

A transparent proxy lets a web server see that a proxy is in use, and it also passes on the client's IP address. This type is typically used for caching and for sharing Internet access among many computers.

Anonymous:

An anonymous proxy identifies itself to the remote server as a proxy, but it does not pass on the client's IP address.

Distorting:

Unlike the types mentioned above, a distorting proxy does transfer an IP address to the remote web server, but the address is one generated by the proxy rather than the client's real, fixed IP address. From the web server's point of view, the client's IP address is therefore distorted.

High anonymous:

A high-anonymity proxy will not send our IP address to the remote computer, and it will not reveal that a proxy server is in use at all. The web server therefore believes it is dealing directly with the client.
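
In practice, the difference between these levels often shows up in the HTTP headers a proxy adds when forwarding a request. The sketch below is a simplified Python illustration: Via and X-Forwarded-For are real header conventions, but exact behaviour varies between proxy products, and the addresses shown are invented.

    # Hedged sketch: headers a proxy might add at each anonymity level.
    def proxy_headers(level, client_ip, fake_ip="203.0.113.7"):
        if level == "transparent":
            # Reveals both the proxy and the client's real address.
            return {"Via": "1.1 myproxy", "X-Forwarded-For": client_ip}
        if level == "anonymous":
            # Reveals the proxy but hides the client's address.
            return {"Via": "1.1 myproxy"}
        if level == "distorting":
            # Reveals the proxy and supplies a generated, incorrect address.
            return {"Via": "1.1 myproxy", "X-Forwarded-For": fake_ip}
        if level == "high_anonymity":
            # Adds nothing: the server sees an ordinary direct client.
            return {}
        raise ValueError(f"unknown level: {level}")

    for lvl in ("transparent", "anonymous", "distorting", "high_anonymity"):
        print(lvl, proxy_headers(lvl, "198.51.100.23"))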

Operation of Content Distribution Networks:

Content Distribution Networks (CDNs) are an important resource in today's enterprise networks, used for everything from broadcasting to the distribution of training and educational material. Building an effective CDN is a big challenge, and the long-term success of such a service largely depends on its operational costs. One of our main goals is therefore to design a service that is as fully automated as possible, reducing the human involvement needed to run it. Human involvement cannot be eliminated entirely; nevertheless we attempt to provide tools and systems that reduce it.

The operation of a CDN falls into two categories:

Proactive management and monitoring: This involves monitoring all the CDN equipment with network management tools that raise an alarm whenever a fault is detected on a CDN device, and redirecting the system around error conditions.

Reactive management: This involves determining why a particular end of the CDN is not receiving a proper signal for a webcast. The same kinds of problems arise in any streaming scenario.
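
A minimal sketch of such proactive polling, with invented device URLs and an assumed /health endpoint, might look like this in Python:

    # Hedged sketch of proactive CDN monitoring: poll each device and
    # raise an alarm on failure. The URLs and endpoint are invented.
    import urllib.request

    DEVICES = ["http://edge1.example.com/health",
               "http://edge2.example.com/health"]

    def check(url, timeout=5):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status == 200
        except OSError:
            return False

    def poll_once():
        for url in DEVICES:
            if not check(url):
                # In a real system this would page an operator or trigger
                # automatic redirection away from the faulty device.
                print(f"ALARM: {url} is not responding")

    poll_once()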

In this process, an end user selects a channel and may fail to receive the stream for any number of reasons. The user might have a desktop problem, or there can be a network problem, either at the user's location or elsewhere in the network. The problem can also be a redirection error, and finally there could be a fault in one of the CDN components or in the media source itself.

Given the many places where errors can occur, a fully automated environment needs a set of fault-finding tools, accessible to both end users and support staff, that can identify the problem quickly. With the user-facing tool we can easily find out the end user's IP address, and with this we can correct any redirection problems affecting that user. The tool also tests a small part of the CDN infrastructure: if the test fails, we know there is a problem in the CDN itself, and if it passes, the fault is probably a multicast problem. In the same way, the support tool checks and tells us where redirection should occur based on the end user's IP address. With these tools we can also verify that all the equipment connecting the CDN to the end user is working effectively.

In implementing this process we learned some interesting lessons about how to run a CDN. Most of the deployment happened without major issues, largely because the bulk of the distributed infrastructure was built and configured at a single location, and because the process was automated, which greatly reduced the chance of misconfiguration. The use of multicast made the streaming CDN service highly scalable and desirable, but most of the technical issues at the beginning arose from multicast, both in the CDN products and in the network. This is partly because the multicast functions of these products are not as well exercised as their unicast counterparts, and partly because multicast-capable networking equipment is not very common in enterprise networks, so there is little real operational experience with it.

As we mentioned in the introduction, many companies are showing interest in an enterprise streaming service, and it must always be clear, whenever there is a fault, who takes responsibility for fixing it. The debugging tools described above help identify which group is responsible for a given problem, making the fault-management process work far more effectively.

Finally, for both technical and business reasons, we maintained a known set of subnets for testing and operating the system. In some locations users were not able to receive the streaming content, and this was handled by recording those locations as known subnets. When an end user accesses the debugging tools, their IP address can be checked automatically against these subnets, which tells us whether the service is expected to work for that subnet or not.
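
A minimal sketch of such a subnet check, using Python's ipaddress module with invented subnets and addresses, might look like this:

    # Hedged sketch of the subnet check described above; the subnet list
    # and client addresses are invented for illustration.
    import ipaddress

    # Known subnets where the streaming service is not yet available.
    UNSERVED = [ipaddress.ip_network("10.20.0.0/16"),
                ipaddress.ip_network("192.168.5.0/24")]

    def service_available(client_ip):
        addr = ipaddress.ip_address(client_ip)
        return not any(addr in net for net in UNSERVED)

    print(service_available("10.20.3.4"))    # False: inside an unserved subnet
    print(service_available("172.16.0.9"))   # True: not in any unserved subnet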

Comparison of the operation of a CDN with that of a web cache:

A proxy server (or web cache) is a server that acts as an intermediary between clients and other servers: a client connects to the proxy to request some service, such as a web page or another resource, held on a different server, and the proxy may answer from its stored copy of the server's response or forward the request on the client's behalf.

A CDN, by contrast, is a system of many computers holding copies of the same data at many points in a network, so that clients draw less bandwidth across the network as a whole. A client accesses the copy of the data nearest to it.

Key Differences between HTTP and FTP:

HTTP (Hypertext Transfer Protocol) and FTP (File Transfer Protocol) are two of the most widely used protocols on the Internet, each with its own function. The main function of HTTP is to provide a way of accessing the World Wide Web; most people don't realise that they are using this protocol every time they open a website or check their personal mail.

FTP, on the other hand, as the name itself tells you, is used to transfer files from one computer to another. It is a good option for people who want to download files: there are servers that host files and allow people to log in and download very large files. As a result, the majority of people who use this protocol are those who routinely upload files to websites, since it provides an easy, hassle-free way to maintain a website.
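
The following Python sketch shows the two protocols side by side using the standard library; the host names, credentials and file names are placeholders, so the FTP part needs a real server to succeed:

    # Hedged sketch of HTTP vs FTP with the standard library.
    import urllib.request
    from ftplib import FTP

    # HTTP: request a web page and read the response body (needs network).
    with urllib.request.urlopen("http://example.com/") as resp:
        page = resp.read()
    print(len(page), "bytes over HTTP")

    # FTP: log in to a server and download a file. Placeholder host and
    # credentials; wrapped so the sketch does not crash without a server.
    try:
        ftp = FTP("ftp.example.com", timeout=10)
        ftp.login("user", "password")
        with open("report.pdf", "wb") as f:
            ftp.retrbinary("RETR report.pdf", f.write)
        ftp.quit()
    except OSError as e:
        print("FTP demo needs a real server:", e)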

Principal differences between the UDP and TCP transport layer network services:

Transmission Control Protocol (TCP) is the protocol most commonly used on the Internet, because it provides error correction. Whenever this protocol is used, delivery is guaranteed by the flow- and error-control mechanisms associated with it: the protocol determines when data needs to be sent or re-sent, and holds back the flow of data until it receives confirmation that the previous packets were transferred successfully. If a packet is lost or corrupted in transit, the client re-requests it from the server until the data received matches the original.

UDP (User Datagram Protocol) is also a protocol commonly used on the Internet. Unlike TCP, it is not used to send data that must arrive intact, such as web pages or database information; instead it is commonly used for streaming audio and video. Because it offers speed, it is widely used for streaming media such as Windows Media Audio files (.WMA), RealPlayer files (.RM) and others. We have to remember that UDP concentrates only on speed, and as a result the quality of the streamed media may suffer.
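
A minimal Python sketch contrasting the two socket types (the loopback addresses and ports are invented, and the TCP connect is expected to fail unless a listener exists):

    # Hedged sketch contrasting TCP and UDP with Python's socket module.
    import socket

    # UDP: connectionless; each sendto() is an independent datagram with
    # no delivery guarantee and no acknowledgement.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"frame-1", ("127.0.0.1", 9999))  # fire and forget
    udp.close()

    # TCP: connection-oriented; connect() performs a handshake and the
    # stack retransmits lost segments until they are acknowledged.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        tcp.connect(("127.0.0.1", 8888))  # fails without a listener
        tcp.sendall(b"important data")    # delivered in order, or error
    except OSError as e:
        print("TCP needs a live peer before data can flow:", e)
    finally:
        tcp.close()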

Why UDP for streaming multimedia applications is controversial:

UDP (User Datagram Protocol) is frequently used in place of TCP for many real-time applications such as digital video. UDP uses IP (Internet Protocol) to transport each data unit as an independent datagram. It suits digital video because, at the client end, it neither reassembles the data stream from segments nor orders the datagrams into the correct sequence, avoiding the delays those steps would introduce.

RTP (Real-time Transport Protocol) runs on top of UDP and adds payload identification, sequence numbering and timestamping. RTP allows data to arrive out of order and be reassembled in the correct order at the receiving end. To monitor the quality of the data distribution through periodic control packets, RTP uses a companion protocol named RTCP (Real-time Transport Control Protocol).
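
As a rough illustration of sequence numbering, the following Python sketch sends UDP datagrams out of order, with a 16-bit sequence number prefixed to each payload in the style of RTP, and reorders them at the receiver. The port and payloads are invented, and loopback delivery is reliable enough for a demonstration:

    # Hedged sketch of RTP-style sequence numbering over UDP.
    import socket, struct

    recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv.bind(("127.0.0.1", 5004))

    send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    frames = [b"frame-0", b"frame-1", b"frame-2"]
    for seq in (2, 0, 1):  # deliberately send out of order
        # Prefix each payload with a 16-bit sequence number, as RTP does.
        send.sendto(struct.pack("!H", seq) + frames[seq], ("127.0.0.1", 5004))

    received = []
    for _ in frames:
        data, _addr = recv.recvfrom(2048)
        (seq,) = struct.unpack("!H", data[:2])
        received.append((seq, data[2:]))

    # Reassemble in sequence order, regardless of arrival order.
    for seq, payload in sorted(received):
        print(seq, payload)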

UDP is an example of a connectionless protocol: one through which you can send a message without first establishing a connection with the recipient. A UDP host simply puts a message onto the network with the desired destination address and hopes it arrives; nothing tells the sender what is happening at the destination. For example, if we send streaming video via UDP, it is uncertain whether the video will be received intact, or in the same order in which it was sent. Because the protocol is connectionless and does not deliver data in the order it was sent, packets may be lost along the way, so the stream quality suffers and the result at the receiving end can be poor. And since UDP lacks any mechanism to control network congestion, it must depend on other, network-based control mechanisms to keep traffic flowing smoothly.

Though the factors mentioned above suggest that UDP is not a very useful protocol, it remains valuable where speed matters more than reliability. Because it does not check whether each piece of data reached its destination, the protocol is faster and more efficient. UDP is therefore well suited to time-sensitive applications, where missing data is preferable to late-arriving data.

What is SSO?

Single Sign-On (SSO) is a process that allows users to log in once and gain what appears to be seamless access to many hosts and applications, reducing the need for multiple logins with multiple passwords. Most SSO products available today provide both authentication and authorization: authorization controls what an application allows a user to do, and not do, once logged in. The two most familiar approaches to SSO are proxies and tokens, and both use a dedicated authentication server that authenticates the user and brokers access between the user and the back-end systems. In the token model, a single encrypted token is issued to the user after successful authentication; this token is then presented to multiple back-end systems as needed. The most common instance of this type of SSO employs the Kerberos network authentication protocol developed at MIT, often in conjunction with the Open Group's Distributed Computing Environment (DCE) technology. Microsoft's Windows 2000 and XP use a version of Kerberos v5 as the default network authentication protocol. One of the biggest drawbacks of the token approach is that it requires acceptance of the token by all the back-end systems, i.e. a back-end system must be 'token enabled' to support SSO. This can prove to be a costly and time-consuming exercise, since some systems may need to be re-architected to accept the token.
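
To make the token model concrete, here is a minimal Python sketch using HMAC-signed tokens in place of real Kerberos tickets; the shared secret, user and back-end names are invented, and a real deployment would be far more elaborate:

    # Hedged sketch of token-based SSO: an authentication server signs a
    # token once; each token-enabled back end verifies it independently.
    import hmac, hashlib, time

    SHARED_SECRET = b"demo-secret"  # invented; never hard-code in practice

    def issue_token(user, ttl=3600):
        # The authentication server signs "user|expiry" once at login.
        payload = f"{user}|{int(time.time()) + ttl}"
        sig = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
        return f"{payload}|{sig}"

    def verify_token(token):
        # Any token-enabled back end can check the signature on its own.
        payload, _, sig = token.rpartition("|")
        expected = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
        user, _, expiry = payload.partition("|")
        if hmac.compare_digest(sig, expected) and time.time() < int(expiry):
            return user          # token accepted: user is authenticated
        return None              # bad signature or expired

    token = issue_token("alice")                        # log in once...
    for backend in ("mail", "files", "crm"):
        print(backend, "accepts", verify_token(token))  # ...use everywhere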

The disadvantages of the token approach led to the development of authentication proxy servers. Instead of logging in directly to the application, or presenting every application with a token, the user logs into the authentication proxy server. The server brokers the authentication request by presenting the correct user credentials (password, certificate or token) to the native application, server or OS, and then manages the user's access based on the native system's response. Authentication proxies are also used by web-based SSO solutions, which allow users to log in via a web page and then access systems and applications through the web interface. Primitive web-based SSO solutions use cookies, but these are not suitable for organizations that have large, distributed web farms, or complex applications whose application-specific logins are coded into the application itself. Purely web-based SSO solutions may also lack support for multiple authentication methods, such as smart cards and tokens, or for applications accessed in ways other than a web browser. To address these shortcomings, many web-based SSO solutions have been upgraded and expanded to incorporate support for multiple modes of authentication and interfaces.

Four Technologies of SSO:

The four SSO technologies that we are going to examine in detail are:

  1. Microsoft's .NET Passport SSO technology
  2. Liberty Alliance Java SSO technology
  3. EToken/Account Logon SSO technology and
  4. Web SSO Technology

MICROSOFT'S .NET PASSPORT SSO TECHNOLOGY:

This SSO technology was developed by Microsoft in 1999; it is created, developed and owned entirely by Microsoft. Passport keeps a special server that holds all the user's data.

Generally, the user gives their confidential data to Passport and asks it to verify that data. If a user wants to buy goods from a website, the site owner receives the request from Passport, but not from the user directly. The owner only receives an ID, unique to each user, because the owner has no access to any confidential user information. Passport then contacts the credit card company to process the payment, and the shipping company so the product can be delivered. From the owner's side it appears that Passport, rather than the individual user, is buying the goods. This technology is advantageous because it limits the opportunity for anyone to steal or misuse the user's confidential data. There is, however, a disadvantage: it reduces the user's control, since the user surrenders all their confidential data to the service for the sake of convenience, and the service may not always be used in a user-friendly way.

Many websites use this technology to process their payments, including msn.com, starbucks.com, ebay.com and nasdaq.com, and becoming a member site is a simple process requiring only a general licence from Microsoft. Any user can set up a Passport account by giving just their name and a general e-mail address, although this differs from company to company, as some ask for additional information to store in the user's profile alongside their own database records. A user signing up for Passport fills in a form of personal information with fields such as name, e-mail address, home or office address, language, date of birth and gender; a user who wishes to use the service from a mobile phone must also provide a telephone number. As discussed earlier, the main worry with this technology is the mishandling of users' personal data, and this too varies between companies, since they are the ones storing all the user data in their databases. Before clicking Finish, the user is shown a message stating that Microsoft .NET Passport strictly does not allow third parties access to their personal data; Microsoft is a member of the TRUSTe privacy programme, an organization that promotes fair practices in handling customer information, and Microsoft has agreed to have its privacy practices audited and to disclose its fair information practices for .NET Passport.

LIBERTY ALLIANCE SSO TECHNOLOGY:

This SSO technology is the work of a consortium of some 160 companies that came together to develop a new standard for federated network identity, expressed through a set of technical specifications. Here, network identity refers to a user's confidential information, such as their name, telephone numbers, SSN, and home or office address, together with their credit card transaction records and payment information. The Liberty Alliance fixes a set of SSO specifications for use by any website; it does not monitor any company's business, nor does it control any of the customer data collected. It does not provide products or services directly or indirectly to users; rather, it is a group of companies writing technical specifications. It is purely the responsibility of the member companies to implement the Liberty specifications in a properly confidential way, and the technology provider is in no way responsible for any misuse of the implementations or specifications by the companies adopting them. Because the architecture is federated rather than centralised, the information provided by a user is not stored by any single entity, and the federated architecture allows any company to link into these networks. A particularly advantageous feature is that users are given full permission and choice to control their own confidential data, and can hide their confidential information. The Alliance aims to achieve a single platform for developing and controlling identity-based web services based on open industry standards.

The Liberty Alliance also advises the companies associated with it to maintain and follow specific privacy guidelines for the data they hold, as they are responsible for collecting the user data and for explaining to users how this data will be used. The Liberty specifications define several roles and their responsibilities: the principal (the user), the service provider (which operates the website), the identity provider, and the attribute provider. The principal obtains a federated identity, can be authenticated by the identity provider, and is able to make decisions regarding the use of their personally identifiable information (PII), such as who can use it and under what conditions. The service provider offers services to the principal; in the usual Internet model the principal is the user and the service provider is the site. The identity provider creates, maintains and manages identity information for principals, and authenticates a principal to the other service providers after the principal first logs in. The attribute provider supplies attributes about a principal and sets the permissions granted on the principal's behalf.

As an example of user authentication between Liberty-enabled websites, consider a user who wishes to check their PII on a site. They first log in to that site: the service provider checks the authentication by verifying it with the identity provider, after which the user gains access to their flight details and other confidential data. If the user then decides to hire a car, they can visit another website. If the user logs in to site B, and site B is also a Liberty-enabled site, it can contact the identity provider directly, which authenticates the user without requiring the login details, such as username and password, to be typed again, provided the user already supplied them when using site A. Site A can share with site B data containing only the user's destination-city information. A further feature is that both sites handle the two pieces of user information in a way that prevents any collusion over the user's data.

With this type of sharing, the two sites hold only the information their services require: site B has no access to the user's confidential data from the other site unless the user grants permission. This also ensures that only the required information is collected and used; site B can run its business without the customer's confidential information, such as username or password, and site A can share data with other websites without losing the user's confidence in the data they have provided.
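
A rough Python sketch of this federated flow, standing in for the much richer Liberty/SAML protocols, might have an identity provider sign an assertion carrying only the attributes site A is permitted to share (here, just the destination city); all names and secrets are invented:

    # Hedged sketch of federated SSO: an identity provider signs a small
    # assertion; site B verifies it and sees only the granted attributes.
    import hmac, hashlib, json

    IDP_SECRET = b"idp-demo-secret"  # shared between the IdP and member sites

    def idp_assert(user_id, shared_attributes):
        # The IdP vouches for the user and releases only permitted attributes.
        body = json.dumps({"sub": user_id, "attrs": shared_attributes})
        sig = hmac.new(IDP_SECRET, body.encode(), hashlib.sha256).hexdigest()
        return {"body": body, "sig": sig}

    def site_b_accept(assertion):
        expected = hmac.new(IDP_SECRET, assertion["body"].encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(assertion["sig"], expected):
            return None                       # forged or tampered assertion
        return json.loads(assertion["body"])  # trusted, minimal user data

    # Site A asks the IdP to share only the destination city with site B.
    a = idp_assert("user-7421", {"destination_city": "Edinburgh"})
    print(site_b_accept(a))  # site B never sees username or password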

ETOKEN / ACCOUNT LOGON SSO TECHNOLOGY:

These two technologies store the user's confidential information in the company's own database. Both ask for separate usernames and passwords, and the user has physical access to a local database. In the eToken technology the data is stored separately on a USB storage device. Once the user logs in to the eToken software, there is no need to log in again and again while accessing multiple websites, because the software logs into each website automatically. With account logon, the data is stored on the computer the user is working at: the software takes charge of all the user's different passwords and supplies them whenever the user visits a particular site, with the user logging in only once using the master password given to them. eToken SSO is easy to implement and is an effective password-management solution for both the user and the business provider, since proper security reduces password-related costs, increasing productivity and customer satisfaction.

Let us see in detail why we might use eToken SSO:

With this SSO, the user's passwords and confidential data can be safely stored on different types of token, offering the customer full security and smooth operation regardless of whether the customer is working offline or online.

This SSO technology has a good support system, provided by eToken TMS, which offers automatic backup: an employee can restore the token's contents if the token is lost or runs short of memory, so the user can always feel confident about the state of the token.

Because the outcome is token-based, no back-end process is required, which makes the technology easy and fast to implement. Being easy to implement and low in cost, it can readily be extended to other kinds of network with the help of the eToken product family.
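
As a rough illustration of the master-password idea behind such tools (not the real eToken internals), the sketch below derives a key from a master password and encrypts a small password vault with it; it assumes the third-party cryptography package, and the account data is invented:

    # Hedged sketch of a master-password vault: one master password
    # unlocks the stored site passwords. Requires: pip install cryptography
    import base64, json, os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def key_from_master(master, salt):
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                         salt=salt, iterations=390_000)
        return base64.urlsafe_b64encode(kdf.derive(master.encode()))

    salt = os.urandom(16)
    vault_key = key_from_master("correct horse battery staple", salt)

    # Encrypt the whole vault; only the master password can recover it.
    vault = {"webmail": "pw1", "intranet": "pw2"}   # invented entries
    blob = Fernet(vault_key).encrypt(json.dumps(vault).encode())

    # Later: the user logs in once with the master password...
    key_again = key_from_master("correct horse battery staple", salt)
    print(json.loads(Fernet(key_again).decrypt(blob)))  # vault restored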

WEB SSO TECHNOLOGY:

Web SSO technology, on the other hand, provides SSO capabilities to a wider user base: employees and business partners accessing your applications, and it can be extended to customers. It is easier to deploy than enterprise SSO, as authentication and authorisation are managed centrally at the web portal. The drawback is that the solution is limited to web-based applications. Web SSO is a browser-based approach: as the name itself suggests, it cannot be used without the Internet, and it also needs cookie support in order to work.
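
A minimal Python sketch of the cookie mechanism, with an invented portal secret and domain, shows how one signed cookie set by the portal can be verified by every application behind it:

    # Hedged sketch of cookie-based web SSO: the portal sets one signed
    # cookie; every web application behind it checks the same signature.
    import hmac, hashlib
    from http.cookies import SimpleCookie

    PORTAL_SECRET = b"portal-demo-secret"  # invented

    def make_sso_value(user):
        sig = hmac.new(PORTAL_SECRET, user.encode(), hashlib.sha256).hexdigest()
        return f"{user}:{sig}"

    def verify_sso_value(value):
        user, _, sig = value.partition(":")
        good = hmac.new(PORTAL_SECRET, user.encode(), hashlib.sha256).hexdigest()
        return user if hmac.compare_digest(sig, good) else None

    # The portal sets the cookie once, scoped to the whole domain...
    cookie = SimpleCookie()
    cookie["sso"] = make_sso_value("alice")
    cookie["sso"]["domain"] = ".example.com"   # shared by every web app
    print(cookie.output())

    # ...and each application verifies the same value on every request.
    print(verify_sso_value(cookie["sso"].value))   # -> "alice"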

Advantages and Disadvantages of SSO:

Advantages of SSO:

The benefits of reducing multiple usernames and passwords may seem obvious, but this intuitive reasoning needs to be backed by analysis of the benefits and the risks. The specific benefits of reducing passwords vary across enterprises, and even across departments. In addition, companies need to examine whether SSO makes sense from a hard-numbers perspective. If all these questions can be answered, and the benefits outweigh the risks, then SSO may well be the technology that you need.

Reduction in Total Cost of Ownership (TCO):

How much does it cost your company every time a password is reset? Depending on the organization and the relationship of the user to the password, administration costs range between $1 and $30 for every password change (the higher costs arise where transaction values are high and many different users need access to a central support centre, for example users of online banking or financial services).

Potential Increase in Security:

If users need to remember only one password, there is a greater chance that they will use a 'stronger' one, and a lesser chance of passwords being visible to third parties because of poor protection by the user.

Support for Multiple Authentication Methods:

Most companies reject stronger multi-factor authentication solutions because they fear that they will be too difficult to manage or too hard to incorporate into existing authentication systems. Choosing an SSO solution that supports multiple authentication methods can reduce the implementation costs while providing strong authentication security.

Integrating External and Internal Users:

Most companies with a large web presence require authentication and login support for two very different populations: internal employees and outside users. Because the security rules and policies that apply to the two groups differ, companies have preferred two separate authentication databases. As the use of extranets and access to remote systems grow, however, many companies are moving towards consolidating the two user bases. An appropriate SSO solution could enable a company to integrate and consolidate the management of these two user bases under a single system.

Some of the general advantages of SSO may include:

  • The use of SSO helps to reduce operational costs.
  • SSO shortens login time, giving users faster access to systems.
  • It improves the user experience: the customer no longer needs to keep a list of passwords.
  • It lightens the load on developers, who no longer need to build similar login handling into every application.
  • It supports strong security measures, such as entering confidential credentials only once, and the use of smart cards makes it more secure still.
  • It helps fulfil users' needs effectively and provides a consistent, reliable way of authenticating all the different types of user.

Disadvantages of SSO:

Consolidating passwords and logins also results in consolidating the points of risk - potentially exposing your enterprise to threats and vulnerabilities. This section will outline the potential risks and will help you balance the business benefits against these risk factors.

Single Point of Failure:

Since a single password affords access to multiple systems, if it is stolen the damage done with it will be much greater. Even if the passwords are not stolen, storing passwords on a single server makes that server a single point of attack. If the server's security is compromised, then the attacker potentially has access to all the systems the server protects.

Vendor Reliability:

Before purchasing an SSO solution, an organization should research the vendor with due care. Some questions to ask: Has the vendor had the solution reviewed by a third party audit? How long has the company been in business? How many customers does the vendor have? Will any customer act as a reference? Are there any published attacks against the vendor's solution and if so has the vendor addressed and corrected the vulnerabilities?

User and Administrator Training:

Organizations planning an SSO must also plan 'Safe Password' training classes for their employees, educating users to select passwords that are difficult to guess and explaining the importance of never sharing passwords with third parties. Users need to understand the basics of password security irrespective of SSO. In this instance, the incremental cost of SSO education needs to be measured. Also, the head count of support and administrative staff needs to be examined, in addition to training for administrators.

Cost to Implement / Complexity of the Environment:

The potentially high cost of implementation in complex environments is often overlooked by companies. There are often restrictions on what solution can be used, imposed by the existing technical landscape. Creating a single sign-on solution for an all web environment is much simpler than creating SSO in a heterogeneous landscape.

Organizations that need authentication to multiple legacy systems comprised of systems like AS/400 and S/390®, and custom applications should select a solution that supports as many of these systems as possible such as Resource Access Control Facility (RACF) from IBM® or Computer Associates' eTrust CA-Top Secret for S/390 mainframes, Pluggable Authentication Modules (PAMs) for Unix systems and Graphical Identification and Authentication (GINA) for Windows.

Also, many companies that have attempted SSO in the past may have legacy Kerberos/ DCE based authentication. If your enterprise has already spent time and money in providing some SSO via DCE, the costs of replacing the existing infrastructure need to be considered against the benefits. Or, look to an SSO provider that can incorporate the existing architecture into an overall framework that supports future needs.

Companies with dial-in or remote populations may need support for RADIUS. Companies that are using directories will probably benefit from the ability to integrate directories, via LDAP into SSO.

Wireless authentication is something that needs to be considered for long term SSO planning, as WLANs are becoming common, making authentication for wireless clients critical to SSO strategies. The IEEE 802.1x authentication standard uses the Extensible Authentication Protocol (EAP) for wired and wireless networks and supports many authentication standards such as RADIUS and Kerberos.

Another point to consider in complex architecture is the varying level of authentication needs, such as multi-factor authentication to highly restricted systems and simple password authentication for less protected ones.
