Distributed Computer Networks

Distributed computer networks consist of clients and servers connected in such a way that any system can potentially communicate with any other system. The platform for distributed systems has been the enterprise network linking workgroups, departments, branches, and divisions of an organization. Data is not located in one server, but in many servers. These servers might be at geographically diverse sites, connected by WAN links.

Figure D-26 illustrates the trend from expensive centralized systems to low-cost distributed systems that can be installed in large numbers. In the late 1980s and early 1990s, distributed systems consisted of large numbers of desktop computers. Today, the Internet and Web technologies have greatly expanded the concept of distributed systems. The Web is a "massively distributed collection of systems," to paraphrase a 3Com paper mentioned in the next section. It consists of countless nodes ranging from servers, to portable computers, to wireless PDAs, not to mention embedded systems that largely talk to one another without human intervention.

Figure D-26: See book.

A paper written by Simon Phipps of IBM (see the link on the related entries page) discusses how distributed computing systems have been removing dependencies in the computing environment.
Networks built with Web technologies (i.e., intranets and the Internet) are truly advanced distributed computing networks. Web technologies add a new dimension to distributed computing. Web servers provide universal access to any client with a Web browser. The underlying computing platform and operating system become less important as open communication and information exchange take hold.

A distributed environment has several important characteristics. It takes advantage of client/server computing and multitiered architectures, distributing processing to inexpensive systems and relieving servers of many tasks. Data may be accessed from a diversity of sites over wired or wireless networks, and it may be replicated to other systems to provide fault tolerance and to place data close to users. Distributing data also provides protection from local disasters. A distributed environment requires a number of supporting components.
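To make the replication idea above concrete, here is a minimal sketch, written in Python with hypothetical class names (DataServer, PrimaryServer), of a primary server that copies each write to replicas at other sites so that reads can be served locally and survive the loss of a copy. It illustrates the general pattern only, not any particular product.

# Minimal sketch of primary/replica data distribution (hypothetical names).
# The primary applies each write locally, then copies it to the replicas.
# Clients read from the nearest replica; any surviving copy can serve reads.

class DataServer:
    def __init__(self, name):
        self.name = name
        self.store = {}          # key -> value held by this server

    def write(self, key, value):
        self.store[key] = value

    def read(self, key):
        return self.store.get(key)


class PrimaryServer(DataServer):
    def __init__(self, name, replicas):
        super().__init__(name)
        self.replicas = replicas  # DataServer objects at other sites

    def write(self, key, value):
        super().write(key, value)       # update the primary copy
        for replica in self.replicas:   # push the change to every replica
            replica.write(key, value)


# Example: one primary at headquarters, two geographically separate replicas.
branch_a = DataServer("branch-a")
branch_b = DataServer("branch-b")
headquarters = PrimaryServer("hq", replicas=[branch_a, branch_b])

headquarters.write("price-list", "2001 edition")
print(branch_a.read("price-list"))   # a local read, served close to the user

The same pattern extends to replicas at WAN-connected sites: losing one site leaves the other copies intact, which is the fault-tolerance and disaster-protection property mentioned above.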
As mentioned, the Web is the ultimate distributed computer system. You can access Web servers all over the world that offer a nearly unlimited amount of content. Directory services help you locate sites. Search engines catalog information all over the Web and make it available for your queries. Caching techniques and "content distribution" are moving information closer to users.

Massively Distributed Systems

3Com has an interesting paper called "Massively Distributed Systems" by Dan Nessett (see the Web link on the related entries page). The paper describes the trend from high-cost centralized systems, to distributed low-cost, high-unit-volume products, to massively distributed systems that are everywhere and that often "operate outside the normal cognizance of the people they serve." This paper is highly recommended for those who want to understand trends in distributed computing.

Nessett discusses two approaches to distributed processing. One method is to move data to the edge processors, as is done with the Web and Web-based file systems. The other is to move processing to the data, as is done with active networking and Java applets (e.g., objects move within the distributed system and carry both code and data). If an object consists primarily of data, it closely approximates moving data to the processing; if it consists primarily of code, it closely approximates moving processing to the data. Yet another approach is the thin-client model, in which users work at graphical terminals connected to servers that perform all processing and store the user's data. See "Thin Clients."

The World Wide Web is a massively distributed system full of objects. Web sites contain documents that hold both objects and referrals to other objects. Nessett discusses how the presentation of massively distributed objects to technically naïve users will require new interfaces. One example is to represent objects in virtual spaces that users navigate as if walking through a 3D world.

Distributed and Parallel Processing

One aspect of distributed computing is the ability to run programs in parallel on multiple computers. Distributed parallel processing is best described as multiprocessing that takes place across computers connected via LANs or the Internet. Dedicated parallel processing is multiprocessing that takes place on systems that are locally attached via a high-speed interface. The former is discussed here because it represents a truly distributed processing environment. The latter is discussed under "Multiprocessing" and "Supercomputer."

Distributed parallel processing across multiple computer systems requires an authoritative scheduling program that can decide where and when to execute parts of a program. Distribution of tasks may take place in real time or on a more relaxed schedule. For example, distributed processing has been used to crack encrypted messages. Distributed.net is a project that employs thousands of users and their computers to crack codes. Users receive a small program that communicates with Distributed.net's main system, which distributes pieces of the challenge to users. The program runs when the user's computer is idle and returns its results to the main computer when done. The main computer eventually compiles all the results submitted by all the computers. Distributed.net claims its network of users has the "fastest computer on Earth."
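The Distributed.net arrangement just described amounts to a coordinator that hands out independent work units and compiles the returned results. The following is a simplified, single-process sketch in Python (all names, such as Coordinator and worker, are hypothetical); a real system would exchange these messages over the network and run the worker only when the client machine is idle.

# Rough sketch of the coordinator/worker pattern described above
# (hypothetical names; a real system would add network messaging,
# authentication, and idle-time detection on each client machine).

from queue import Queue

class Coordinator:
    """Hands out independent work units and compiles the results."""
    def __init__(self, work_units):
        self.pending = Queue()
        for unit in work_units:
            self.pending.put(unit)
        self.results = []

    def get_work(self):
        return None if self.pending.empty() else self.pending.get()

    def submit_result(self, result):
        self.results.append(result)


def worker(coordinator, check_keys):
    """Client-side loop: fetch a unit, process it, return the result."""
    while (unit := coordinator.get_work()) is not None:
        # Stand-in for real work, e.g. testing a range of candidate keys.
        coordinator.submit_result((unit, check_keys(unit)))


# Example: split a keyspace into units and let two "clients" work through it.
coordinator = Coordinator(work_units=[range(0, 100), range(100, 200)])
for _ in range(2):
    worker(coordinator, check_keys=lambda unit: max(unit))
print(coordinator.results)

Because the work units are independent, the coordinator does not care which client returns which result or in what order, which is what allows thousands of intermittently available machines to cooperate on one problem.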
HTC (high-throughput computing) environments are large collections of workstations, often called "grid environments." The Globus Project is an HTC project that helps scientists use idle cycles on pools of workstations and supercomputers. The system is based on Condor, a proven system that has been used to harness idle workstation time on LANs. The Web sites for Globus and Condor are listed on the related entries page.

Copyright (c) 2001 Tom Sheldon and Big Sur Multimedia.