Thursday, January 19, 2012

Defining a Distributed System by Its Properties © 11/12/2011

Introduction: To define and explain a distributed system by defining its properties

Within a distributed system are single entities, each of which survives on its own with the individual capabilities and components that make up its own properties, yet has the capability to communicate with others like it by passing messages from point to point. The system and all of its single entities maintain one common goal: to work together toward a shared purpose while still working independently of one another.
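To make the point-to-point idea concrete, here is a minimal sketch in Python of two independent entities exchanging a message over a socket. It is an illustration only; the address, port, and message are placeholder assumptions, not part of any particular system.

```python
# Minimal sketch: two independent entities cooperating only by
# passing a message point to point. Host, port, and message text
# are assumed placeholders for illustration.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9501  # assumed local test address

def entity_a():
    """Entity A: works on its own and listens for a message from a peer."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            print("A received:", conn.recv(1024).decode())

def entity_b():
    """Entity B: finishes its independent work, then passes a message to A."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"status: local work complete")

if __name__ == "__main__":
    listener = threading.Thread(target=entity_a)
    listener.start()
    time.sleep(0.2)          # give A a moment to start listening
    entity_b()
    listener.join()
```

Each entity runs its own work and cooperates only through the messages it passes, which is the commonality described above.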
The aim of this article is to explain two broad distributed system models and to define their properties.
There are two principal reasons for using distributed systems. Firstly, the very nature of the application may necessitate the use of a communication network that links several computers.
Secondly, a network-capable computer can work individually and independently of the network while still benefiting from it in practical networked scenarios.
A distributed system is also more reliable and more cost-effective, and can attain a healthy level of performance, by using a collection of numerous low-end computers in place of a single high-end computer.
The hybrid distributed system, an alternative to a monolithic uniprocessor system, is otherwise known as a fault-tolerant cell array: a system containing an array of computers, each with individual aspects. Super-fast multithreading and multi-core processors run several programs at the same time with fewer bottlenecks, and the design supports massively parallel data-processing architectures within a single monolithic entity.
The workstation-server model
A server is an application or device that performs services for connected clients as part of a client-server architecture. It can also be a computer system designated to run a specific server application, or one that serves applications to users on an intranet. (Compair, 2011)
Servers keep files organized, provide applications and share points, and connect those on the system to the Internet.
Workstations themselves are typically high-end computers designed for specific tasks, such as graphic design and audio and video editing.
A typical organization such as a government will have a large number of computer workstations, laptops, and client desktops dispersed over a wide geographical area, bridging across the globe. The premise is to have each unit configured with an operating system that may include similar or dissimilar features, hardware, and software configurations, depending on the geographical area in which the unit is deployed, along with one or more common applications that interact with the main entity (the server) over the World Wide Web or an intranet.
The advantage of implementing multiple units spread across the globe is that information can be gathered from each location and transmitted back to the main server.
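As a sketch of that gather-and-transmit pattern, the snippet below shows a deployed unit packaging local information and posting it back to a central server over an intranet. The URL, site name, and payload fields are hypothetical stand-ins, not a real endpoint or government API.

```python
# Sketch: a deployed unit reports locally gathered information back
# to the main server. The endpoint URL and payload are hypothetical.
import json
import urllib.request

REPORT_URL = "http://intranet.example.gov/collect"  # hypothetical endpoint

def report_to_main_server(site: str, data: dict) -> int:
    """Package local data and transmit it to the central server."""
    payload = json.dumps({"site": site, "data": data}).encode("utf-8")
    req = urllib.request.Request(
        REPORT_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # 200 means the server accepted the report

# Example: a unit in one geographical area reports its local status.
# report_to_main_server("field-office-7", {"workstations": 42, "online": 40})
```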
Processor Pool
A shared processor pool allows a partial partition to be assigned to a specific user or group of users. This is accomplished through the logical makeup of a server or server array, and the same processes can also be implemented logically in a global distribution system. It is commonly labeled a share drive or share server, where one or more drive partitions, on a single drive or across physical drives, are assigned; each group or single unit is then allocated its assigned space. Assigned logical partitions are basically configured shared processor pools. You can assign a logical partition to a shared processor pool at the time you create the logical partition, or by reassigning existing logical partitions.
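A toy model of that assignment, under my own illustrative naming (not IBM's actual management interface), might look like this:

```python
# Toy model of assigning logical partitions to a shared processor
# pool. Class, attribute, and partition names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class SharedProcessorPool:
    name: str
    capacity: float                      # processing units available to the pool
    partitions: list = field(default_factory=list)

    def assign(self, partition: str) -> None:
        """Allocate a logical partition to this pool's shared space."""
        self.partitions.append(partition)

# Partitions can be assigned at creation time or by later reassignment.
pool = SharedProcessorPool(name="finance-pool", capacity=2.5)
pool.assign("lpar-payroll")
pool.assign("lpar-reporting")
print(pool.partitions)   # ['lpar-payroll', 'lpar-reporting']
```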
Common workload issues
IBM has been powering processor pools for a long time, since the POWER5 and now the POWER6 processors came along; both were designed to handle multiple shared processor pools. By contrast, UNIX hosts and some other distributed systems are likely dedicated to single applications, designed to handle the demands of high and low peaks of service, or common workload. Units using the pooling system take advantage of having less traffic throughout their connection to the server, so the only high demand is among those using that partition. Creating separate shared processor pools and putting applications that have the same or similar software needs together provides a licensing-cost reduction. Pools allow greater control over the percentage of processor resources, can be used to isolate the resources available to the units within the shared pools, and can also be used to separate workloads.
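The licensing point can be sketched in a few lines: group applications by their shared software need, and each group becomes one pool covered by one license. The application and license names below are invented for illustration.

```python
# Sketch of the grouping idea: applications with the same software
# need share one pool, so one license covers the pool rather than
# every unit. All names here are hypothetical.
from collections import defaultdict

apps = [
    ("payroll",   "db-license"),
    ("reporting", "db-license"),
    ("web-front", "app-server-license"),
]

pools = defaultdict(list)
for app, software_need in apps:
    pools[software_need].append(app)   # similar needs share one pool

for license_name, members in pools.items():
    # one license per pool instead of one per application
    print(f"{license_name}: {members} -> 1 license, {len(members)} apps")
```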
Maintenance and security can be most difficult, and can make having multitudes of units seem like a disadvantage. Within a closed system, or an open system such as globally deployed units, the security risks are high, and for the open units the risk is higher. The obvious disadvantages of this type are replacements, problematic fixes, enforcing codes, updates, viruses, and things like theft. All of these disadvantages can be addressed and remedied by implementing policy, secure logon applications like "Smart off" and "Tokens", and general operating and security classes that address the proper behavior needed while using all government computers.
This model is primarily used in high-workload operations where many users are in the same organization yet hold different jobs, with all the information leading back to a main controlled server. Nodes may make up part of any type of server, alongside separate nodes known as print servers.
The needs, the wants, and the components deployed into a system, great or small, are dictated by the nature of the environment in which the system will operate.


Works Cited
Compair. (2011). Retrieved November 18, 2011, from Diffen: http://www.diffen.com/difference
Tanenbaum, A., & Van Steen, M. (2007). Distributed Systems: Principles and Paradigms. Upper Saddle River, NJ, USA: Pearson Prentice Hall.
Sintes, T. (2002, August 23). App server, Web server: What's the difference? Retrieved November 18, 2011, from JavaWorld.com: http://www.javaworld.com/javaqa/2002-08/01-qa-0823-appvswebserver.html?page=2

Security and the Internet & Distributed Systems By Kenneth A Brewer © 11/23/2011

Drawbacks and disadvantages inherent in distributed computing are somewhat disturbing to the system owner or administrator.
There are drawbacks to what seems perfect: a distributed system doing all the work, transferring data around the globe and back again. What could go wrong?
Lots, if the system is not secure. Security is the number one vulnerability in anything you want to keep out of prying hands. Even if the data is streamed on a dedicated line or pipe, the transference of secure data across the state to the mainframe is vulnerable at many points along that pipe. Wi-Fi can be a dangerous entry point. Keeping the network closed with firewalls, and with passwords that change every month, helps prevent loss.
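One hedged illustration of protecting that pipe: tag each message with a keyed hash (HMAC) so tampering anywhere along the line is detected at the receiving end. The shared key below is a placeholder; a real deployment would distribute and rotate keys through a secure channel, in line with the monthly-change policy mentioned above.

```python
# Sketch: detect tampering along an untrusted pipe by tagging each
# message with an HMAC the mainframe can verify. The key here is a
# placeholder, not a real key-management scheme.
import hashlib
import hmac

SHARED_KEY = b"rotate-me-monthly"   # placeholder; rotate per policy

def tag(message: bytes) -> bytes:
    """Compute an integrity tag the receiver can verify."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes) -> bool:
    """Constant-time check that the message was not altered in transit."""
    return hmac.compare_digest(tag(message), received_tag)

msg = b"secure payload headed for the mainframe"
t = tag(msg)
assert verify(msg, t)                       # intact message passes
assert not verify(msg + b"tampered", t)     # altered message fails
```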
New data and software installs designed primarily for a specific organization can place the system at risk. They can introduce backdoors, key-loggers, viruses, and spyware, so administrators have to play a big part in the development of organization-specific software.
Diagnostics, IT administrators, managers, and controllers have to be in place to aid in the defense against theft and to handle maintenance, policy issues, and IT personnel matters. This all comes at a hefty cost to many organized establishments. The risk factors involved determine how much IT staffing, and how much control, is or will be needed to maintain the system.

Distributed Architecture within a Distributed System By Kenneth A Brewer © 11/28/2011

Information is essential to the accomplishment of government structural objectives. It is acknowledged that reliability, integrity, and availability are significant concerns. The use of networks, principally the Internet, is modernizing the way governments conduct business, commerce, trade, and industry, and has proven essential for their future.
In the same manner, the factors that make a system beneficial also pose detrimental threats when not properly controlled, leaving it vulnerable to attack. The benefits of speed, reliability, and creative design have brought unparalleled risks to government operations. Computer security has, in turn, become extremely important at all levels of government that operate information systems. Fraud, sabotage, malicious intent, natural disasters, and unintentional errors by official computer users can pose overwhelming concerns if information resources are not protected. Such exposure poses significant risks to data systems, to information, and to the critical operations of the infrastructures they support.
Security measures are necessary to avoid data tampering, deception, and interference with critical operations. The unwitting threat of inappropriate disclosure of sensitive information, even in an innocent conversation between friends, also adds to the risk factor. The effective use of computer security becomes indispensable in reducing the risk of malicious attacks.
Policy is put into place to help safeguard from inside and outside threats. The Government Accountability Office (GAO) or the Operations and Investigations (O&I) teams will evaluate and audit security policies and offer recommendations for reducing risk factors to an acceptable level. In memorandums and handbooks, the GAO has issued guidelines governing the policies used in data distribution systems within the government.
My notes: If you are serious about distributed IT designs, there is a wealth of government guidelines for the security and development of sophisticated systems, both at the library and online. K.B.
Works Cited


GAO & NSAA. (2001). Management Planning Guide for Information System Security Auditing. Washington, DC: GAO, NSAA.

Distributed Design By Kenneth A Brewer © 11/11/2011

Attributes:
Present a 2-3 page overview of some key features that should be found within a distributed computing configuration to the Washington DC Consulting team.

The premise of an IT distributed design is to implement a logical organization through automation and process. The design must allow for value, responsibility, and growth potential, along with experience in design, development, deployment, and service.
Proper architectural pattern design is among the key features in the development of any distributed computing design. Driven by this rapid success, sophisticated systems have by their nature grown from independently created applications to Internet-connected networks of numerous network types and managed services, now known as the cloud.
The creation of a successful distributed system involves how the architectural design addresses its accountabilities within the design and its development: how it will be maintained, supported, and governed.
Sophisticated distributed systems will involve these capabilities and concerns:
Implementation of applications and/or services.
Physical assets, including communications equipment and wiring; computers, laptops, and similar devices; and backups and archives of sensitive data such as personnel and legal records.
Use of services from within as well as from external providers.
Passwords, configuration utilities, confidentiality, and data integrity.
Management of services, including risk management.
Unauthorized deletion or modification, and unauthorized disclosure, of information.
Database penetration from hackers, malware, Trojans, viruses, and worms.
Bugs in software applications.
Physical abnormalities: floods, earthquakes, and human interventions.
Organizations will manage by applying methods, patterns, and technologies to model systems that enable proper control.
Scalability is important to any system, and expandability should be incorporated into the design as well. Governments and companies do not stay the same size; they expand and shrink at a moment's notice.
Adopting a design is a daunting task in itself. It is important to adapt to the working patterns of the environment in which the system will be deployed. The architecture will draw on the key architectural styles: layered, object-based, data-centered, and event-based architectures. (Andrew Tanenbaum, 2007)
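As a small sketch of the layered style just named, the Python below separates a user interface, a processing layer, and a data layer, with each layer calling only the one directly beneath it. The functions and data are illustrative placeholders, not a prescribed implementation.

```python
# Sketch of the layered architectural style: user interface ->
# processing -> data, each layer calling only the layer below it.
# All names and data here are placeholders.

def data_layer(query: str) -> list:
    """Bottom layer: owns storage and retrieval."""
    records = {"users": ["alice", "bob"]}
    return records.get(query, [])

def processing_layer(query: str) -> str:
    """Middle layer: applies logic to raw data from the data layer."""
    rows = data_layer(query)
    return f"{len(rows)} result(s): {', '.join(rows)}"

def user_interface_layer(query: str) -> None:
    """Top layer: presents results; never touches storage directly."""
    print(processing_layer(query))

user_interface_layer("users")   # -> 2 result(s): alice, bob
```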
Distribution systems consist of internal components such as computers that can distribute information within the confines of a facility, commonly a manufacturing plant, business office, college, or government campus. Such a system consists of a collection of independently working computers whose collective character represents a single functioning operating system: one trunk with many branches, and many roots, all functioning to support the trunk. This distribution system would therefore be classified and named a "tree."
The important characterizing function of any system is the collaboration, known as a relationship, between more than one computer, including a server and autonomous devices. These devices, sometimes called nodes, are inline servers making up an array of sensor networks within the main works of the system. Still, a mainframe will make up the concentration center of any network. Nodes within the network take care of localized information traffic and help distribute the workload, thus supporting the mainframe and relieving it of overtasking.
Here is an example: large corporate companies and the government will have one or two sites that support a mainframe. The nodes are at local sites, providing support within each site. A distribution pipeline, as we call it, sends information to the main site where IT and its support teams are located. Information can be sent continuously or updated on a time schedule. This frees the mainframe from being overtasked at any given moment, allowing a steady, controlled rate of information digestion.
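Here is a minimal sketch of that pipeline idea: a local node queues its own site's traffic and flushes it to the main site in scheduled batches, so the mainframe digests information at a controlled rate. The queued items and the send function are stand-ins for real traffic and transmission.

```python
# Sketch of the "pipeline": a node absorbs local traffic and forwards
# it to the main site in one scheduled batch instead of per item.
# Items and the send function are placeholders.
import queue

local_traffic = queue.Queue()

def handle_locally(item: str) -> None:
    """The node services its own site first, then queues for the mainframe."""
    local_traffic.put(item)

def scheduled_flush(send) -> None:
    """On a timer, drain the queue in one batch rather than per item."""
    batch = []
    while not local_traffic.empty():
        batch.append(local_traffic.get())
    if batch:
        send(batch)   # one controlled transmission to the main site

handle_locally("badge-scan:0730")
handle_locally("report:daily")
scheduled_flush(lambda batch: print("to mainframe:", batch))
```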
Naturally, security, cost, and monitoring of a system must be considered in its development, focusing on the numerous practices that can be followed in order to protect government proprietary policies.
First, take a look at the assets and the level of protection each needs, evaluating the risk to each asset. Then weigh the cost against the level of need to protect against threats.
Keep in mind that protection is only present when human behavior is observed and policy is upheld. There are too many times when information is lost because a PC is left unattended or onlookers observe what they shouldn't have.
Generalizing to a key focus among these important issues: exploring the possibilities of compromise is called risk assessment. Risk management may include devising the plan, implementing policy, planning deployment, and, most of all, training and following up with scheduled training.
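A toy risk-assessment pass might weigh each asset's exposure (likelihood times impact) against the cost of protecting it, echoing the cost-versus-need evaluation above. All assets and figures below are invented for illustration.

```python
# Toy risk assessment: score each asset by likelihood x impact, then
# compare mitigation cost to the exposure it removes. Numbers are
# invented for illustration only.
assets = [
    # (asset, likelihood 0-1, impact in $, mitigation cost in $)
    ("personnel records", 0.30, 500_000, 40_000),
    ("print server",      0.10,  20_000, 15_000),
]

for name, likelihood, impact, cost in assets:
    exposure = likelihood * impact          # expected loss if unprotected
    worthwhile = cost < exposure            # protect only when cost < risk
    print(f"{name}: exposure=${exposure:,.0f}, "
          f"mitigate={'yes' if worthwhile else 'reassess'}")
```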
A conclusion with thought: a distribution system can work over an intranet, LAN, or WAN, locally and the world over, incorporating application layers for the user interface, processing, and data handling. Such systems are capable of being both centralized and global. Information must be easily read, stored, managed, and shared. (Andrew Tanenbaum, 2007) They must be empowered to change, to share, and to be controlled. Attributes must include a well-maintained balance of connectivity with complexity, and dissimilar network systems and devices throughout a managed, controlled automation.
   
Bibliography

Microsoft Support. (2006, February 4). Revision 5.4. Retrieved July 4, 2011, from Microsoft Support: http://support.microsoft.com/kb/230125
Internet Protocol Suite. (2011). Retrieved April 16, 2011, from Answers.com: http://www.answers.com/topic/tcp-ip#ixzz1K11MBYWB
Tanenbaum, A., & Van Steen, M. (2007). Distributed Systems: Principles and Paradigms. Upper Saddle River, NJ, USA: Pearson Prentice Hall.
Benson, C. (2011). Best Practices for Enterprise Security. Retrieved November 9, 2011, from Microsoft TechNet: http://technet.microsoft.com/en-us/library/cc723503.aspx
Erbschloe, M. (2004). Physical Security for IT. Washington, DC: Digital Press.