Cluster Computing

Introduction

"Cluster" is a widely used term for a set of independent computers combined into a unified system through software and networking. At the most fundamental level, whenever two or more computers are used together to solve a problem, they can be considered a cluster.

Clusters are typically used either for High Availability (HA), to provide greater reliability, or for High Performance Computing (HPC), to provide greater computational power than a single computer can deliver. As HPC clusters grow in size, they become increasingly complex and time-consuming to manage. Tasks such as deployment, maintenance, and monitoring of these clusters can be handled effectively by an automated cluster management solution.

The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system. Clusters are usually deployed to improve performance and availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.[1]

Cluster Computing vs. Grid Computing

Cluster computing characteristics:
* Tightly coupled computers.
* Single system image (SSI).
* Centralized job management and scheduling.

Cluster computing is used for high-performance computing and high-availability computing.

Grid computing characteristics:
* Loosely coupled computers.
* Distributed job management and scheduling.
* No single system image.

Advantages Of Cluster Computing

Easy to deploy

The cluster computing system is very easy to deploy: software is installed and configured automatically. Using a web interface, cluster nodes can be added and managed easily, which reduces effort and saves time.

Complete

The cluster computing system provides a rich set of software, including common HPC (High Performance Computing) tools, along with web-based management that covers cluster monitoring, reporting, and alerting automatically.

Open

Because there is no proprietary "lock-in", it is an open system. It is cost-effective to acquire and manage and has multiple sources of support and supply. The system also supports multiple standard provisioning methods.

Easy to manage

The system is very easy to manage, as there is no need to edit shell scripts or XML templates. It can change node group definitions and maintain several software versions with ease. It takes the risk out of software and hardware upgrades by supporting them without upgrading the installer node.

Flexible

Because cluster computing is an open system, it is very flexible. It supports real-world topologies and synchronizes cluster files without re-installation. The system can also exploit the power of advanced GPUs (Graphics Processing Units) for general HPC calculations, and software configurations can be changed at any time.

Expandable

It is very easy to add new hardware models and cluster nodes at any time. The system can be upgraded to Platform LSF, which has proven scalability to 10,000+ CPUs. Commercial add-on solutions allow the cluster to grow in both size and sophistication.

Supported

The system is well supported, as it includes software updates and a single point of contact for a fully integrated software and hardware solution, so you can enjoy peace of mind with a fully supported computing system.

High Server Availability

It is always desirable to achieve high server availability. For example, if a Message Switch fails while it is transferring messages, those messages remain "stuck" on the Message Switch until it is repaired. In some service environments such a delay would be unacceptable. Fail-over clustering is designed to provide very high server availability for environments with this type of service requirement.

In a failover cluster, there are two computers (or occasionally several computers). One (primary) provides the service in normal situations. A second (failover) computer is present in order to run the service when the primary system fails. The primary system is monitored, with active checks every few seconds to ensure that the primary system is operating correctly.

The system performing the monitoring may be either the failover computer or an independent system (called the cluster controller). If the active system fails, or components associated with it fail (such as network hardware), the monitoring system detects the failure and the failover system takes over operation of the service. A key element of the fail-over clustering approach is that both computers share a common file system. One approach is to provide this with a dual-ported RAID (Redundant Array of Independent Disks), so that the disk subsystem is not dependent on any single disk drive. An alternative approach is to use a SAN (Storage Area Network).
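As an illustration, the active monitoring described above can be sketched as a simple heartbeat loop. This is a hypothetical Python sketch, not the mechanism of any particular clustering product: the host name, port, check interval, and failure threshold are all assumptions, and the takeover step is only a placeholder.

import socket
import time

PRIMARY_HOST = "primary.example.com"   # hypothetical address of the primary server
PRIMARY_PORT = 5432                    # hypothetical port of the monitored service
CHECK_INTERVAL = 5                     # seconds between active checks
MAX_FAILURES = 3                       # consecutive failed checks before failover

def primary_is_alive(timeout=2.0):
    """Active check: try to open a TCP connection to the primary's service port."""
    try:
        with socket.create_connection((PRIMARY_HOST, PRIMARY_PORT), timeout=timeout):
            return True
    except OSError:
        return False

def promote_failover_node():
    """Placeholder for the takeover step: attach the shared storage (RAID/SAN),
    start the service on the failover node, and claim the service address."""
    print("Primary unreachable - promoting failover node")

def monitor():
    failures = 0
    while True:
        if primary_is_alive():
            failures = 0
        else:
            failures += 1
            if failures >= MAX_FAILURES:
                promote_failover_node()
                break
        time.sleep(CHECK_INTERVAL)

if __name__ == "__main__":
    monitor()

In practice the takeover step is the hard part: the failover node must attach to the shared storage (the dual-ported RAID or SAN mentioned above) and take over the service address before it can start serving requests.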

Load Balancing Server

Install your service or application onto multiple servers that are configured to share the workload. This type of configuration is a load-balanced cluster. Load balancing scales the performance of server-based programs, such as a Web server, by distributing client requests across multiple servers. Load-balancing technologies, commonly referred to as load balancers, receive incoming requests and redirect them to a specific host if necessary.

The load-balanced hosts respond concurrently to different client requests, even multiple requests from the same client. For example, a Web browser may obtain the multiple images within a single Web page from different hosts in the cluster. This distributes the load, speeds up processing, and shortens the response time to clients. Load balancers use different algorithms to control traffic. The goal of these algorithms is to distribute load intelligently and/or maximize the utilization of all servers within the cluster. Some examples of these algorithms, illustrated with a short sketch after the list, include:

* Round-robin. A round-robin algorithm distributes the load equally to each server, regardless of the current number of connections or the response time. Round-robin is suitable when the servers in the cluster have equal processing capabilities; otherwise, some servers may receive more requests than they can process while others use only part of their resources.

* Weighted round-robin. A weighted round-robin algorithm accounts for the different processing capabilities of each server. Administrators manually assign a performance weight to each server, and a scheduling sequence is automatically generated according to the server weights. Requests are then directed to the different servers according to this round-robin scheduling sequence.

* Least-connection. A least-connection algorithm sends requests to the server in the cluster that is currently serving the fewest connections.

* Load-based. A load-based algorithm sends requests to the server in the cluster that currently has the lowest load.
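As an illustration, the selection step of these four strategies can be sketched as follows. This is a minimal Python sketch: the server names, weights, connection counts, and load figures are invented for the example, and a real load balancer would obtain such metrics from the servers themselves.

import itertools

# Hypothetical back-end servers with assigned weights (for weighted round-robin)
# and live metrics (for least-connection and load-based selection).
servers = [
    {"name": "web1", "weight": 3, "connections": 12, "load": 0.40},
    {"name": "web2", "weight": 1, "connections": 7,  "load": 0.85},
    {"name": "web3", "weight": 2, "connections": 25, "load": 0.10},
]

# Round-robin: cycle through the servers in a fixed order.
round_robin = itertools.cycle(servers)

# Weighted round-robin: repeat each server according to its weight,
# then cycle through the expanded scheduling sequence.
weighted_sequence = [s for s in servers for _ in range(s["weight"])]
weighted_round_robin = itertools.cycle(weighted_sequence)

def least_connection():
    """Pick the server currently serving the fewest connections."""
    return min(servers, key=lambda s: s["connections"])

def load_based():
    """Pick the server currently reporting the lowest load."""
    return min(servers, key=lambda s: s["load"])

# Dispatch a few requests with each strategy.
for _ in range(5):
    print("round-robin      ->", next(round_robin)["name"])
for _ in range(5):
    print("weighted rr      ->", next(weighted_round_robin)["name"])
print("least-connection ->", least_connection()["name"])
print("load-based       ->", load_based()["name"])

With the figures above, least-connection selects web2 (7 connections) and load-based selects web3 (load 0.10), while the weighted sequence sends three of every six requests to web1.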