Facebook: By definition, a distributed system is a collection of independent computers that appears to its users as a single coherent system. To a Facebook user it does appear as a single system on their screen. Millions of users use it at once, so it has many processing systems to handle the full user load, and pages still load quickly. A failure may occur on the server side, but users rarely notice it. Facebook has strong fault tolerance: the service runs continuously, and if any component fails to operate, a backup system keeps it running, which makes it a highly reliable system.
The system is also easy to maintain and repair. As the number of users grows day by day, Facebook's servers grow their resources as well; adding more processors and other resources makes the system stronger and more powerful. Users' data are distributed across allocated servers all over the world.
All the data are also replicated. Replication not only increases availability but also helps to balance the load between components, leading to better performance. Also, in geographically widely dispersed systems, having a copy nearby can hide much of the communication latency. So if one server fails, the overall system does not lose the data. Facebook also has middleware to communicate between all the client systems (Windows, macOS, Android/iOS). Facebook is open to all, and its systems are distributed all over the world.
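The latency-hiding benefit of replication described above can be sketched as a client picking the nearest replica that is still up. This is a minimal illustration only: the region names and latency figures are invented assumptions, not Facebook's real topology.

```python
# Hypothetical replica set: region names and latencies are illustrative
# assumptions, not real infrastructure data.
REPLICAS = {
    "us-east": {"latency_ms": 20, "up": True},
    "eu-west": {"latency_ms": 90, "up": True},
    "ap-south": {"latency_ms": 180, "up": False},  # a failed replica
}

def nearest_available(replicas):
    """Return the name of the lowest-latency replica that is still up."""
    up = {name: r for name, r in replicas.items() if r["up"]}
    if not up:
        raise RuntimeError("no replica available")
    return min(up, key=lambda name: up[name]["latency_ms"])

print(nearest_available(REPLICAS))  # -> us-east
```

If `us-east` goes down, the same lookup transparently falls back to `eu-west`, which is the availability property the paragraph describes.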
PHONE NETWORK: We all rely on the telephone network; it has become part of our daily life. It is also a distributed system. All the exchanges of the telephone network are connected to each other. When one user talks with another over a telephone line, it seems to them that there is a single dedicated line between them. Thousands of people use the phone network at the same time, yet it does not collapse: it has many multiprocessing systems that keep it performing quickly. The system may face technical difficulties, but there is always a backup system to support users, so they rarely notice a network failure. It is a highly reliable system.
If for any reason part of it fails to operate, a backup system keeps it running. The system is also easy to maintain and repair. Data and resources are allocated remotely at every exchange, and the data are also replicated. Replication not only increases availability but also helps to balance the load between components, leading to better performance. Also, in geographically widely dispersed systems, having a copy nearby can hide much of the communication latency. The network also has middleware, which helps the system communicate with different types of phones.
Internet: Internet routing protocols (BGP, OSPF, RIP) have traditionally favored responsiveness over consistency. A router applies a received update immediately to its forwarding table before propagating the update to other routers, including those that potentially depend upon the outcome of the update. Responsiveness comes at the cost of routing loops and black holes: a router A thinks its route to a destination is via B, but B disagrees. By favoring responsiveness (a liveness property) over consistency (a safety property), Internet routing has lost both. Our position is that consistent state in a distributed system makes its behavior more predictable and securable.
To this end, we present consensus routing, a consistency-first approach that cleanly separates safety and liveness using two logically distinct modes of packet delivery: a stable mode, where a route is adopted only after all dependent routers have agreed upon it, and a transient mode, which heuristically forwards the small fraction of packets that encounter failed links. Somewhat surprisingly, we find that consensus routing improves overall availability when used in conjunction with existing transient-mode heuristics such as backup paths, deflections, or detouring. Experiments on the Internet's AS-level topology show that consensus routing eliminates nearly all transient disconnectivity in BGP.
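The stable-mode rule above can be illustrated with a toy sketch: a router holds an update as pending and applies it to its forwarding table only after every dependent router has acknowledged it. The `Router` class, router names, and prefix below are invented for illustration; this is not the paper's actual protocol machinery.

```python
# Toy sketch of the "stable mode" idea: an update is adopted only after
# all dependent routers have agreed (acknowledged) it.
class Router:
    def __init__(self, name):
        self.name = name
        self.table = {}    # destination prefix -> adopted next hop
        self.pending = {}  # destination prefix -> (next hop, acks still awaited)

    def propose(self, dest, next_hop, dependents):
        """Stage an update; it is not yet visible in the forwarding table."""
        self.pending[dest] = (next_hop, set(dependents))

    def ack(self, dest, router_name):
        """Record one dependent's agreement; adopt once all have agreed."""
        next_hop, waiting = self.pending[dest]
        waiting.discard(router_name)
        if not waiting:  # consensus reached: now safe to adopt the route
            self.table[dest] = next_hop
            del self.pending[dest]

a = Router("A")
a.propose("10.0.0.0/8", "B", dependents={"B", "C"})
a.ack("10.0.0.0/8", "B")
print("10.0.0.0/8" in a.table)  # -> False (still waiting on C)
a.ack("10.0.0.0/8", "C")
print(a.table["10.0.0.0/8"])    # -> B (adopted only after all acks)
```

Holding the update pending until all acknowledgments arrive is what prevents the loops and black holes that immediate adoption can cause.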
Cloud: A collection of hardware and software systems that contains more than one processing or storage element, but appears as a single coherent system running under a loosely or tightly controlled regime, is called distributed computing. The computers in a distributed system do not share memory; instead, they pass messages between themselves, synchronously or asynchronously.
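The no-shared-memory, message-passing model described above can be sketched with two "nodes" that communicate only through queues. Here the nodes are modeled as threads purely for convenience; in a real distributed system they would be separate machines exchanging network messages.

```python
# Minimal sketch of message passing between two nodes that share no state,
# modeled here as threads whose only channel is a pair of queues.
import queue
import threading

inbox_a, inbox_b = queue.Queue(), queue.Queue()

def node_b():
    msg = inbox_b.get()         # blocks until a message arrives
    inbox_a.put(f"ack: {msg}")  # replies by sending a message back

t = threading.Thread(target=node_b)
t.start()
inbox_b.put("hello")            # node A sends asynchronously
reply = inbox_a.get()           # node A waits for the reply
t.join()
print(reply)  # -> ack: hello
```

The `put` call is asynchronous (node A continues without waiting), while `get` is a synchronous receive, mirroring the two message-passing styles the paragraph mentions.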
This is a type of segmented or parallel computing that runs on heterogeneous systems. Cloud Computing refers to both the applications delivered as services over the Internet and the hardware and systems software in the datacenters that provide those services. The services themselves are referred to as Software as a Service (SaaS), while the datacenter hardware and software is called the Cloud. When a Cloud is made available in a pay-as-you-go manner to the general public, it is a Public Cloud, and the service being sold is Utility Computing.
The term Private Cloud refers to the internal datacenters of a business or other organization that are not made available to the general public. Thus, Cloud Computing is the sum of SaaS and Utility Computing, but does not include Private Clouds. People can be users or providers of SaaS, or users or providers of Utility Computing.