Saturday, December 4

Network monitoring

The term network monitoring describes the use of a system that constantly monitors a computer network for slow or failing components and that notifies the network administrator (via email, pager or other alarms) in case of outages. It is a subset of the functions involved in network management.


Details

While an intrusion detection system monitors a network for threats from the outside, a network monitoring system monitors the network for problems caused by overloaded and/or crashed servers, network connections or other devices.
For example, to determine the status of a webserver, monitoring software may periodically send an HTTP request to fetch a page. For email servers, a test message might be sent through SMTP and retrieved by IMAP or POP3.
Commonly measured metrics are response time, availability and uptime, although both consistency and reliability metrics are starting to gain popularity. The widespread addition of WAN optimization devices is having an adverse effect on most network monitoring tools -- especially when it comes to measuring accurate end-to-end response time -- because they limit round-trip visibility.
Status request failures - such as when a connection cannot be established, it times out, or the document or message cannot be retrieved - usually produce an action from the monitoring system. These actions vary -- an alarm may be sent (via SMS, email, etc.) to the resident sysadmin, automatic failover systems may be activated to remove the troubled server from duty until it can be repaired, etc.
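As a rough sketch of such a check-and-alert loop (written here in Python with only the standard library; the URL, timeout and interval values are invented for illustration, and a real monitor would page or email rather than print):

    import time
    import urllib.request

    URL = "http://example.com/"   # hypothetical page to monitor
    TIMEOUT = 10                  # seconds before a request counts as failed
    INTERVAL = 60                 # seconds between checks

    def check_once(url):
        """Fetch the page once; return (ok, elapsed_seconds)."""
        start = time.time()
        try:
            with urllib.request.urlopen(url, timeout=TIMEOUT) as resp:
                ok = 200 <= resp.status < 400
        except OSError:
            ok = False
        return ok, time.time() - start

    while True:
        ok, elapsed = check_once(URL)
        if ok:
            print(f"OK: {URL} responded in {elapsed:.2f}s")
        else:
            print(f"ALERT: {URL} failed after {elapsed:.1f}s")  # alarm hook goes here
        time.sleep(INTERVAL)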
Monitoring the performance of a network uplink is also known as network traffic measurement.

Network Tomography

Network tomography is an important area of network measurement, which deals with monitoring the health of various links in a network using end-to-end probes sent by agents located at vantage points in the network/Internet.

Route Analytics

Route analytics is another important area of network measurement. It includes the methods, systems, algorithms and tools to monitor the routing posture of networks. Incorrect routing or routing issues cause undesirable performance degradation or downtime.

Various types of protocols

A website monitoring service can check HTTP pages, HTTPS, SNMP, FTP, SMTP, POP3, IMAP, DNS, SSH, TELNET, SSL, TCP, ICMP, SIP, UDP, media streaming and a range of other ports, with a variety of check intervals ranging from every four hours to every one minute. Typically, most network monitoring services test your server anywhere from once per hour to once per minute.
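Under the hood, many of these checks come down to whether a TCP connection to the service's port succeeds. A minimal sketch in Python (the host name and port list are placeholders):

    import socket

    def port_open(host, port, timeout=5):
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Hypothetical mail host checked on three well-known ports.
    for service, port in [("SMTP", 25), ("IMAP", 143), ("HTTPS", 443)]:
        state = "up" if port_open("mail.example.com", port) else "down"
        print(f"{service} (port {port}): {state}")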

Servers around the globe

Network monitoring services usually have a number of servers around the globe - for example in America, Europe, Asia, Australia and other locations. By having multiple servers in different geographic locations, a monitoring service can determine whether a Web server is available across different networks worldwide. The more locations used, the more complete the picture of network availability.



(source:wikipedia)

Network management

Network management refers to the activities, methods, procedures, and tools that pertain to the operation, administration, maintenance, and provisioning of networked systems. 
Operation deals with keeping the network (and the services that the network provides) up and running smoothly. It includes monitoring the network to spot problems as soon as possible, ideally before users are affected.
Administration deals with keeping track of resources in the network and how they are assigned. It includes all the "housekeeping" that is necessary to keep the network under control.
Maintenance is concerned with performing repairs and upgrades—for example, when equipment must be replaced, when a router needs a patch for an operating system image, when a new switch is added to a network. Maintenance also involves corrective and preventive measures to make the managed network run "better", such as adjusting device configuration parameters.
Provisioning is concerned with configuring resources in the network to support a given service. For example, this might include setting up the network so that a new customer can receive voice service.
A common way of characterizing network management functions is FCAPS—Fault, Configuration, Accounting, Performance and Security.
Functions performed as part of network management accordingly include controlling, planning, allocating, deploying, coordinating, and monitoring the resources of a network; network planning; frequency allocation; predetermined traffic routing to support load balancing; cryptographic key distribution authorization; configuration management; fault management; security management; performance management; bandwidth management; route analytics; and accounting management.
Data for network management is collected through several mechanisms, including agents installed on infrastructure, synthetic monitoring that simulates transactions, logs of activity, sniffers and real user monitoring. In the past network management mainly consisted of monitoring whether devices were up or down; today performance management has become a crucial part of the IT team's role which brings about a host of challenges—especially for global organizations.
Note: Network management does not include user terminal equipment.


Technologies

A number of access methods exist to support network and network device management. These include SNMP, command-line interfaces (CLIs), custom XML, CMIP, Windows Management Instrumentation (WMI), Transaction Language 1, CORBA, NETCONF, and the Java Management Extensions (JMX).
Schemas include the WBEM, the Common Information Model, and MTOSI amongst others.
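As a hedged illustration of the SNMP access method, the snippet below shells out to the snmpget tool from the net-snmp suite to read a device's sysDescr object (it assumes net-snmp is installed; the address and "public" community string are placeholders):

    import subprocess

    # sysDescr.0 (OID 1.3.6.1.2.1.1.1.0) holds a device's description string.
    result = subprocess.run(
        ["snmpget", "-v2c", "-c", "public", "192.0.2.1", "1.3.6.1.2.1.1.1.0"],
        capture_output=True, text=True, timeout=10,
    )
    if result.returncode == 0:
        print(result.stdout.strip())
    else:
        print("SNMP query failed:", result.stderr.strip())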
Medical service providers are a niche market for managed service providers, as HIPAA legislation steadily increases the demand for knowledgeable providers. Medical service providers are liable for the protection of their clients' confidential information, including in the electronic realm. This liability creates a significant need for managed service providers who can provide secure infrastructure for the transportation of medical data.


(source:wikipedia)

Integrated business planning

Integrated business planning (IBP) refers to the technologies, applications and processes of connecting the planning function across the enterprise to improve organizational alignment and financial performance. IBP accurately represents a holistic model of the company in order to link strategic planning and operational planning with financial planning.
By deploying a single model across the enterprise and leveraging the organization’s information assets, corporate executives, business unit heads and planning managers use IBP to evaluate plans and activities based on the true economic impact of each consideration.

History of IBP

The roots of IBP date back to 1996, when Dr. Robert Whitehair, working at the University of Massachusetts, developed technology for capturing and exploiting expert knowledge. Dr. Whitehair worked in close collaboration with several colleagues, including Professor Igor Budyachevsky of the Russian Academy of Science, to develop a technology now called COR (Constraint Oriented Reasoning). COR technology was used to capture expert knowledge and then embed it in applications that allow users to leverage it through a natural language interface.
Using grant funding from corporate giants such as Chase, DuPont, General Electric, PricewaterhouseCoopers, Shell and the Williams Company, Dr. Whitehair captured expert knowledge from numerous disciplines and introduced an application for business analysis that empowered decision makers.

Components of IBP


Planning is integrated across the enterprise, which enables decision makers to identify the activities that deliver the greatest financial impact across the company.
Recent developments and successes in the areas of business intelligence and performance management are accelerating the adoption of integrated business planning. While IBP had been a vision for many years, the technology required for modeling, optimization and scaling did not exist. Dr. Robert C. Whitehair of River Logic, considered to be the father of IBP[citation needed], used constraint-oriented reasoning (COR) and knowledge-based rules engines to generate mathematical representations of planning constraints and variables, thereby making IBP a reality.
Dr. Whitehair, working in collaboration with scientists in the U.S. and at the Russian Academy of Science, solved the problem of representing real-life situations as mathematical equivalents at scale. Today, IBP software routinely runs thousands of analyses of a mathematical representation of roughly 1,000,000 equations and in excess of 1,000,000 variables in a typical solve.
In broad terms, the use of mathematical representations and extensive knowledge bases enable users to build the massive, multivariable models required for Integrated Business Planning.
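As a toy illustration of what "a mathematical representation of planning constraints and variables" means in miniature (this is not River Logic's COR technology, just a two-product linear program solved with SciPy; all figures are invented):

    from scipy.optimize import linprog

    # Maximize profit 40*x1 + 30*x2 subject to capacity constraints.
    c = [-40, -30]            # linprog minimizes, so profits are negated
    A_ub = [[2, 1],           # machine hours used per unit of x1, x2
            [1, 2]]           # labor hours used per unit of x1, x2
    b_ub = [100, 80]          # available machine hours, labor hours

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print("production plan:", res.x)      # -> [40. 20.]
    print("maximum profit:", -res.fun)    # -> 2200.0

A production-scale IBP model differs from this sketch mainly in size, with orders of magnitude more variables and constraints.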

Analyses
Companies use IBP to translate insight into financial impact by providing analyses such as:
Identification of top financial (profit) drivers
Answers to “what-if” questions
Simulation
Optimization to any variable or ratio, including balance sheet, profitability, NPV, cash flow, etc.
Intelligent sensitivity analysis
Modeling infeasibilities
Understanding of unique performance driver relationships
Opportunity costs and marginal economic value

Benefits
IBP transforms planning into a decisive competitive advantage by:
Providing an integrated planning platform across marketing, operations and finance
Generating a holistic understanding of performance drivers
Quantifying the financial impact and interdependencies across planning alternatives
Optimizing strategic planning and resource allocation
Balancing sales and operations planning for profitability
Quantifying financial risk
Increasing business flexibility

IBP Applications

IBP has been used to successfully model and integrate the planning efforts in a number of applications, including:
Product profitability
Customer profitability
Capital expenditures
Manufacturing operations
Supply chain
Business processes (human and information-based)
Business policy
Market demand curves
Competitive strategy


(source:wikipedia)

Network administrator

A network administrator is a person responsible for the maintenance of computer hardware and software that comprises a computer network. This normally includes deploying, configuring, maintaining and monitoring active network equipment. A related role is that of the network specialist, or network analyst, who concentrates on network design and security.
The network administrator (or "network admin") is usually the highest level of technical/network staff in an organization and will rarely be involved with direct user support. The network administrator will concentrate on the overall integrity of the network, server deployment, security, and ensuring that the network connectivity throughout a company's LAN/WAN infrastructure is on par with technical considerations at the network level of an organization's hierarchy. Network administrators are considered Tier 3 support personnel who work only on break/fix issues that could not be resolved at the Tier 1 (helpdesk) or Tier 2 (desktop/network technician) levels.
Depending on the company, the network administrator may also design and deploy networks. However, these tasks may be assigned to a network engineer should one be available to the company. A network engineer designs, implements, and/or troubleshoots computer networks. In general, a network engineer will not regularly perform system administration tasks, but will instead concentrate on high-level network-related duties such as network architecture, network design, choice of network devices, and network policies. Network engineers will rarely be involved with direct user support, as they are normally placed as tier three support.[1]
The actual role of the Network Administrator will vary from company to company, but will commonly include activities and tasks such as network address assignment, assignment of routing protocols and routing table configuration as well as configuration of authentication and authorization – directory services. It often includes maintenance of network facilities in individual machines, such as drivers and settings of personal computers as well as printers and such. It sometimes also includes maintenance of certain network servers: file servers, VPN gateways, intrusion detection systems, etc.
Network specialists and analysts concentrate on the network design and security, particularly troubleshooting and/or debugging network-related problems. Their work can also include the maintenance of the network's authorization infrastructure, as well as network backup systems.
The administrator is responsible for the security of the network and for assigning IP addresses to the devices connected to the networks. Assigning IP addresses gives the subnet administrator some control over the personnel who connect to the subnet. It also helps ensure that the administrator knows each system that is connected and who is personally responsible for the system.
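For illustration, the kind of subnet and assignment bookkeeping described above can be sketched with Python's standard ipaddress module (the network values and assignee are hypothetical):

    import ipaddress

    # Hypothetical site block split into four /24 subnets.
    site = ipaddress.ip_network("10.0.0.0/22")
    for net in site.subnets(new_prefix=24):
        print(net, "-", net.num_addresses - 2, "assignable host addresses")

    # Track who is personally responsible for each assigned address.
    assignments = {ipaddress.ip_address("10.0.0.10"): "file server (J. Smith)"}
    for addr, owner in assignments.items():
        print(addr, "->", owner)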

Duties of a network administrator

Many organizations use a three-tier support staff solution, with tier one (help desk) personnel handling the initial calls, tier two (technicians and PC support analysts) handling escalations, and tier three (network administrators) handling the issues that remain. Most of those organizations follow a fixed staffing ratio, and being a network administrator is either the top job, or next to top job, within the technical support department.
Network administrators are responsible for making sure that the computer hardware and network infrastructure for an IT organization are properly maintained. They are deeply involved in the procurement of new hardware (for example: Does it meet existing standardization requirements? Does it do the job required?), rolling out new software installs, maintaining the disk images for new computer installs (usually by having a standardized OS and application install), making sure that licenses are paid for and up to date for software that needs them, maintaining the standards for server installations and applications, monitoring the performance of the network, checking for security breaches, poor data management practices and more.
Most network administrator positions require a breadth of technical knowledge and the ability to learn the ins and outs of new networking and server software packages quickly. While designing and drafting a network is usually the job of a network engineer, many organizations roll that function into a network administrator position as well.
One of the chief jobs of a network administrator is connectivity. Network administrators are in charge of making sure that connectivity works for all users in their organization, and for making sure that data security for connections to the internet is properly handled. (For network administrators handling the security aspects, this can be a full-time job.)
Trouble tickets work their way through the help desk, then through the analyst-level support, before reaching the network administrator's level. As a result, in their day-to-day operations, network administrators should not be dealing directly with end users as a routine function. Most of their work should go to scheduling and implementing routine maintenance tasks, updating disaster prevention programs, making sure that network backups are run, and doing test restores to make sure that those restores are sound.
Training and certifications for network administrators

Cisco CCENT
Cisco CCNA
Cisco CCIP
Cisco CCNP
Cisco CCIE
Microsoft Certified System Administrator
Microsoft Certified System Engineer
Microsoft Certified IT Professional
CompTIA's Network+
Red Hat Certified Engineer (RHCE)


(source:wikipedia)

Electronics

Electronics is the branch of science and technology which makes use of the controlled motion of electrons through different media. The ability to control electron flow is usually applied to information handling or device control. Electronics is distinct from electrical science and technology, which deals with the generation, distribution, control and application of electrical power. This distinction started around 1906 with the invention by Lee De Forest of the triode, which made electrical amplification possible with a non-mechanical device. Until 1950 this field was called "radio technology" because its principal application was the design and theory of radio transmitters, receivers and vacuum tubes.
Most electronic devices today use semiconductor components to perform electron control. The study of semiconductor devices and related technology is considered a branch of physics, whereas the design and construction of electronic circuits to solve practical problems come under electronics engineering. This article focuses on engineering aspects of electronics.


Electronic devices and components

An electronic component is any physical entity in an electronic system used to affect the electrons or their associated fields in a desired manner consistent with the intended function of the electronic system. Components are generally intended to be connected together, usually by being soldered to a printed circuit board (PCB), to create an electronic circuit with a particular function (for example an amplifier, radio receiver, or oscillator). Components may be packaged singly or in more complex groups as integrated circuits. Some common electronic components are capacitors, inductors, resistors, diodes, transistors, etc. Components are often categorized as active (e.g. transistors and thyristors) or passive (e.g. resistors and capacitors).

Types of circuits

Circuits and components can be divided into two groups: analog and digital. A particular device may consist of circuitry that has one or the other or a mix of the two types.

Analog circuits
Main article: Analog electronics


(Image: Hitachi J100 adjustable frequency drive chassis.)
Most analog electronic appliances, such as radio receivers, are constructed from combinations of a few types of basic circuits. Analog circuits use a continuous range of voltage as opposed to discrete levels as in digital circuits.
The number of different analog circuits so far devised is huge, especially because a 'circuit' can be defined as anything from a single component to a system containing thousands of components.
Analog circuits are sometimes called linear circuits although many non-linear effects are used in analog circuits such as mixers, modulators, etc. Good examples of analog circuits include vacuum tube and transistor amplifiers, operational amplifiers and oscillators.
One rarely finds modern circuits that are entirely analog. These days analog circuitry may use digital or even microprocessor techniques to improve performance. This type of circuit is usually called "mixed signal" rather than analog or digital.
Sometimes it may be difficult to differentiate between analog and digital circuits as they have elements of both linear and non-linear operation. An example is the comparator which takes in a continuous range of voltage but only outputs one of two levels as in a digital circuit. Similarly, an overdriven transistor amplifier can take on the characteristics of a controlled switch having essentially two levels of output.

Digital circuits
Main article: Digital electronics
Digital circuits are electric circuits based on a number of discrete voltage levels. Digital circuits are the most common physical representation of Boolean algebra and are the basis of all digital computers. To most engineers, the terms "digital circuit", "digital system" and "logic" are interchangeable in the context of digital circuits. Most digital circuits use two voltage levels labeled "Low" (0) and "High" (1). Often "Low" will be near zero volts and "High" will be at a higher level depending on the supply voltage in use. Ternary (three-state) logic has been studied, and some prototype computers have been made.
Computers, electronic clocks, and programmable logic controllers (used to control industrial processes) are constructed of digital circuits. Digital Signal Processors are another example.
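As a small illustration of the Boolean-algebra basis, the Python sketch below models an adder built out of logic gates, both of which appear among the building blocks listed next:

    def half_adder(a, b):
        """Add two one-bit values: XOR gives the sum, AND gives the carry."""
        return a ^ b, a & b

    def full_adder(a, b, carry_in):
        """Chain two half-adders plus an OR gate to add three bits."""
        s1, c1 = half_adder(a, b)
        s2, c2 = half_adder(s1, carry_in)
        return s2, c1 | c2

    for bits in [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]:
        print(bits, "-> sum, carry =", full_adder(*bits))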
Building-blocks:
Logic gates
Adders
Binary Multipliers
Flip-Flops
Counters
Registers
Multiplexers
Schmitt triggers
Highly integrated devices:
Microprocessors
Microcontrollers
Application-specific integrated circuit (ASIC)
Digital signal processor (DSP)
Field-programmable gate array (FPGA)

Heat dissipation and thermal management

Main article: Thermal management of electronic devices and systems
Heat generated by electronic circuitry must be dissipated to prevent immediate failure and improve long-term reliability. Techniques for heat dissipation include heat sinks and fans for air cooling, and other forms of computer cooling such as water cooling. These techniques use convection, conduction, and radiation of heat energy.
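A common back-of-the-envelope calculation in thermal management estimates a device's junction temperature from its power dissipation and thermal resistance, Tj = Ta + P x R(theta-JA); the sketch below uses invented component values:

    # Junction temperature estimate: Tj = Ta + P * R_theta_ja
    ambient_c = 25.0     # ambient temperature, degrees C
    power_w = 2.0        # power dissipated by the device, watts
    r_theta_ja = 35.0    # junction-to-ambient thermal resistance, C/W
                         # (example value of the kind found on a datasheet)

    tj = ambient_c + power_w * r_theta_ja
    print(f"Estimated junction temperature: {tj:.0f} C")   # 95 C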

Noise

Main article: Electronic noise
Noise is associated with all electronic circuits. Noise is defined as unwanted disturbances superposed on a useful signal that tend to obscure its information content. Noise is not the same as signal distortion caused by a circuit. Noise may be electromagnetically or thermally generated; thermal noise can be decreased by lowering the operating temperature of the circuit. Other types of noise, such as shot noise, cannot be removed, as they are due to limitations in physical properties.
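For example, the RMS thermal (Johnson-Nyquist) noise voltage across a resistor is v_n = sqrt(4kTRB), which is why lowering the operating temperature T reduces this noise; a quick calculation with example values:

    import math

    k = 1.380649e-23   # Boltzmann constant, J/K
    T = 300.0          # absolute temperature, kelvin
    R = 10e3           # resistance, ohms (example value)
    B = 20e3           # measurement bandwidth, Hz (example value)

    v_n = math.sqrt(4 * k * T * R * B)
    print(f"RMS thermal noise: {v_n * 1e6:.2f} microvolts")   # ~1.82 uV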

Electronics theory

Main article: Mathematical methods in electronics
Mathematical methods are integral to the study of electronics. To become proficient in electronics it is also necessary to become proficient in the mathematics of circuit analysis.
Circuit analysis is the study of methods of solving generally linear systems for unknown variables such as the voltage at a certain node or the current through a certain branch of a network. A common analytical tool for this is the SPICE circuit simulator.
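As a minimal sketch of the kind of linear solve involved (a hand-rolled two-node nodal analysis using NumPy, not SPICE itself; the component values are invented):

    import numpy as np

    # 10 V source -- R1 -- node 1 -- R2 -- node 2 -- R3 -- ground.
    # Kirchhoff's current law at each node yields the linear system G v = i.
    R1, R2, R3, Vs = 1e3, 2e3, 3e3, 10.0
    G = np.array([[1/R1 + 1/R2, -1/R2],
                  [-1/R2,        1/R2 + 1/R3]])
    i = np.array([Vs / R1, 0.0])

    v = np.linalg.solve(G, i)
    print(f"V1 = {v[0]:.3f} V, V2 = {v[1]:.3f} V")   # 8.333 V, 5.000 V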
Also important to electronics is the study and understanding of electromagnetic field theory.

Computer aided design (CAD)

Main article: Electronic design automation
Today's electronics engineers have the ability to design circuits using premanufactured building blocks such as power supplies, semiconductors (such as transistors), and integrated circuits. Electronic design automation software programs include schematic capture programs and printed circuit board design programs. Popular names in the EDA software world are NI Multisim, Cadence (ORCAD), Eagle PCB and Schematic, Mentor (PADS PCB and LOGIC Schematic), Altium (Protel), LabCentre Electronics (Proteus), gEDA, KiCad and many others.

Construction methods

Main article: Electronic packaging
Many different methods of connecting components have been used over the years. For instance, early electronics often used point-to-point wiring with components attached to wooden breadboards to construct circuits. Cordwood construction and wire wrap were other methods used. Most modern day electronics now use printed circuit boards made of materials such as FR4, or the cheaper (and less hard-wearing) Synthetic Resin Bonded Paper (SRBP, also known as Paxoline/Paxolin (trade marks) and FR2) - characterised by its light yellow-to-brown colour. Health and environmental concerns associated with electronics assembly have gained increased attention in recent years, especially for products destined for the European Union, with its Restriction of Hazardous Substances Directive (RoHS) and Waste Electrical and Electronic Equipment Directive (WEEE), which went into force in July 2006.

See also

Audio engineering
Broadcast engineering
Computer engineering
Electronic engineering
Index of electronics articles
Marine electronics
Power electronics
Robotics


(source:wikipedia)

Friday, December 3

Data

The term data refers to qualitative or quantitative attributes of a variable or set of variables. Data (plural of "datum") are typically the results of measurements and can be the basis of graphs, images, or observations of a set of variables. Data are often viewed as the lowest level of abstraction from which information and then knowledge are derived. Raw data refers to a collection of unprocessed numbers, characters, images or other outputs from devices that convert physical quantities into symbols.

Read More: Data Transmission

Etymology

The word data (pronounced /ˈdeɪtə/ DAY-tə, /ˈdætə/ DA-tə, or /ˈdɑːtə/ DAH-tə) is the Latin plural of datum, neuter past participle of dare, "to give", hence "something given". In discussions of problems in geometry, mathematics, engineering, and so on, the terms givens and data are used interchangeably. Data can also be a representation of a fact, figure, or idea. Such usage is the origin of data as a concept in computer science: data are numbers, words, images, etc., accepted as they stand.

Usage in English


In English, the word datum is still used in the general sense of "something given". In cartography, geography, nuclear magnetic resonance and technical drawing it is often used to refer to a single specific reference datum from which distances to all other data are measured. Any measurement or result is a datum, but data point is more common, albeit tautological. Both datums (see usage in the datum article) and the originally Latin plural data are used as the plural of datum in English, but data is more commonly treated as a mass noun and used with a verb in the singular form, especially in day-to-day usage. For example, "This is all the data from the experiment." This usage is inconsistent with the rules of Latin grammar and traditional English ("These are all the data from the experiment"). Even when a very small quantity of data is referred to (one number, for example) the phrase piece of data is often used, as opposed to datum.
Many style guides and international organizations, such as the IEEE Computer Society, allow usage of data as either a mass noun or plural based on author preference. Other professional organizations and style guides require that authors treat data as a plural noun. For example, the Air Force Flight Test Center specifically states that the word data is always plural, never singular.
Data is accepted and often treated as a singular mass noun in everyday educated usage. Some major newspapers such as The New York Times use it alternately in the singular or plural. In the New York Times the phrases "the survey data are still being analyzed" and "the first year for which data is available" have appeared on the same day. In scientific writing data is often treated as a plural, as in "These data do not support the conclusions," but many people now think of data as a singular mass entity like information and use the singular in general usage. British usage now widely accepts treating data as singular in standard English, including everyday newspaper usage, at least in non-scientific use. UK scientific publishing still prefers treating it as a plural. Some UK university style guides recommend using data for both singular and plural use, and some recommend treating it only as a singular in connection with computers.

Meaning of data, information and knowledge

The terms information and knowledge are frequently used for overlapping concepts. The main difference is in the level of abstraction being considered. Data is the lowest level of abstraction, information is the next level, and finally, knowledge is the highest level among all three.[citation needed] Data on its own carries no meaning. In order for data to become information, it must be interpreted and take on a meaning. For example, the height of Mt. Everest is generally considered as "data", a book on Mt. Everest geological characteristics may be considered as "information", and a report containing practical information on the best way to reach Mt. Everest's peak may be considered as "knowledge".


Information as a concept bears a diversity of meanings, from everyday usage to technical settings. Generally speaking, the concept of information is closely related to notions of constraint, communication, control, data, form, instruction, knowledge, meaning, mental stimulus, pattern, perception, and representation.
Beynon-Davies uses the concept of a sign to distinguish between data and information; data are symbols while information occurs when symbols are used to refer to something. 
It is people and computers who collect data and impose patterns on it. These patterns are seen as information which can be used to enhance knowledge. These patterns can be interpreted as truth, and are authorized as aesthetic and ethical criteria. Events that leave behind perceivable physical or virtual remains can be traced back through data. Marks are no longer considered data once the link between the mark and observation is broken. 
Raw data refers to a collection of unprocessed numbers, characters, images or other outputs from devices that convert physical quantities into symbols. Such data is typically further processed by a human or input into a computer, stored and processed there, or transmitted (output) to another human or computer (possibly through a data cable). Raw data is a relative term; data processing commonly occurs by stages, and the "processed data" from one stage may be considered the "raw data" of the next.
Mechanical computing devices are classified according to the means by which they represent data. An analog computer represents a datum as a voltage, distance, position, or other physical quantity. A digital computer represents a datum as a sequence of symbols drawn from a fixed alphabet. The most common digital computers use a binary alphabet, that is, an alphabet of two characters, typically denoted "0" and "1". More familiar representations, such as numbers or letters, are then constructed from the binary alphabet.
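A quick Python illustration of familiar representations built on the binary alphabet:

    n = 42
    print(bin(n))             # 0b101010 -- the number 42 as bits
    print(bin(ord("A")))      # 0b1000001 -- the letter 'A' (code point 65)
    print(int("101010", 2))   # 42 -- reading the bits back as a number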
Some special forms of data are distinguished. A computer program is a collection of data, which can be interpreted as instructions. Most computer languages make a distinction between programs and the other data on which programs operate, but in some languages, notably Lisp and similar languages, programs are essentially indistinguishable from other data. It is also useful to distinguish metadata, that is, a description of other data. A similar yet earlier term for metadata is "ancillary data." The prototypical example of metadata is the library catalog, which is a description of the contents of books.
Experimental data refers to data generated within the context of a scientific investigation by observation and recording. Field data refers to raw data collected in an uncontrolled in situ environment.



(source:wikipedia)

Computer software

Computer software, or just software, is the collection of computer programs and related data that provide the instructions telling a computer what to do. We can also say software refers to one or more computer programs and data held in the storage of the computer for some purpose. Program software performs the function of the program it implements, either by directly providing instructions to the computer hardware or by serving as input to another piece of software. The term was coined to contrast with the old term hardware (meaning physical devices). In contrast to hardware, software is intangible, meaning it "cannot be touched". Software is also sometimes used in a more narrow sense, meaning application software only. Sometimes the term includes data that has not traditionally been associated with computers, such as film, tapes, and records.
Examples of computer software include:
Application software includes end-user applications of computers such as word processors or video games, and ERP software for groups of users.
Middleware controls and co-ordinates distributed systems.
Programming languages define the syntax and semantics of computer programs. For example, many mature banking applications were written in the COBOL language, originally invented in 1959. Newer applications are often written in more modern programming languages.
System software includes operating systems, which govern computing resources. Today large applications running on remote machines, such as websites, are considered to be system software, because the end-user interface is generally through a graphical user interface, such as a web browser.
Testware is software for testing hardware or a software package.
Firmware is low-level software often stored on electrically programmable memory devices. Firmware is given its name because it is treated like hardware and run ("executed") by other software programs.
Shrinkware is the older name given to consumer-purchased software, because it was often sold in retail stores in a shrink-wrapped box.
Device drivers control parts of computers such as disk drives, printers, CD drives, or computer monitors.
Programming tools help conduct computing tasks in any category listed above. For programmers, these could be tools for debugging or reverse engineering older legacy systems in order to check source code compatibility.


History

For the history prior to 1946, Read More: History of computing hardware.
The first theory about software was proposed by Alan Turing in his 1936 essay On Computable Numbers, with an Application to the Entscheidungsproblem (decision problem). The term "software" was first used in print by John W. Tukey in 1958. Colloquially, the term is often used to mean application software. In computer science and software engineering, software is all information processed by computer systems, programs and data. The academic fields studying software are computer science and software engineering.
The history of computer software is most often traced back to the first software bug in 1946. As more and more programs enter the realm of firmware, and the hardware itself becomes smaller, cheaper and faster due to Moore's law, elements of computing first considered to be software join the ranks of hardware. Most hardware companies today have more software programmers on the payroll than hardware designers, since software tools have automated many tasks of printed circuit board engineers. Just like the auto industry, the software industry has grown from a few visionaries operating out of their garages with prototypes. Steve Jobs and Bill Gates were the Henry Ford and Louis Chevrolet of their times, capitalizing on ideas already commonly known before they started in the business. In the case of software development, this moment is generally agreed to be the publication in the 1980s of the specifications for the IBM Personal Computer published by IBM employee Philip Don Estridge. Today his move would be seen as a type of crowd-sourcing.


Until that time, software was bundled with the hardware by original equipment manufacturers (OEMs) such as Data General, Digital Equipment and IBM. When a customer bought a minicomputer, at that time the smallest computer on the market, the computer did not come with pre-installed software; it had to be installed by engineers employed by the OEM. Computer hardware companies not only bundled their software, they also placed demands on the location of the hardware, in a refrigerated space called a computer room. Most companies had their software on the books for 0 dollars, unable to claim it as an asset (this is similar to the financing of popular music in those days). When Data General introduced the Data General Nova, a company called Digidyne wanted to use Data General's RDOS operating system on its own hardware clone. Data General refused to license the software (which was hard to do anyway, since it was on the books as a free asset) and claimed its "bundling rights". In 1985 the Supreme Court set a precedent called Digidyne v. Data General by letting a 9th Circuit decision stand: Data General was eventually forced into licensing the operating system software because restricting the license to only DG hardware was ruled an illegal tying arrangement. Soon after, IBM 'published' its DOS source for free, and Microsoft was born. Unable to sustain the loss from lawyers' fees, Data General ended up being taken over by EMC Corporation. The Supreme Court decision made it possible to value software and also to purchase software patents. The move by IBM was almost a protest at the time; few in the industry believed that anyone would profit from it other than IBM (through free publicity). Microsoft and Apple were thus able to cash in on 'soft' products. It is hard to imagine today that people once felt that software was worthless without a machine. There are many successful companies today that sell only software products, though there are still many common software licensing problems due to the complexity of designs and poor documentation, leading to patent trolls.
With open software specifications and the possibility of software licensing, new opportunities arose for software tools that then became the de facto standard, such as DOS for operating systems, but also various proprietary word processing and spreadsheet programs. In a similar growth pattern, proprietary development methods became standard Software development methodology.

Overview
(Image: A layer structure showing where the operating system is located on generally used software systems on desktops.)
Software includes all the various forms and roles that digitally stored data may have and play in a computer (or similar system), regardless of whether the data is used as code for a CPU, or other interpreter, or whether it represents other kinds of information. Software thus encompasses a wide array of products that may be developed using different techniques such as ordinary programming languages, scripting languages, microcode, or an FPGA configuration.
The types of software include web pages developed in languages and frameworks like HTML, PHP, Perl, JSP, ASP.NET and XML, and desktop applications like OpenOffice.org and Microsoft Word developed in languages like C, C++, Java, C#, or Smalltalk. Application software usually runs on an underlying operating system such as Linux or Microsoft Windows. Software (or firmware) is also used in video games and for the configurable parts of the logic systems of automobiles, televisions, and other consumer electronics.
Computer software is so called to distinguish it from computer hardware, which encompasses the physical interconnections and devices required to store and execute (or run) the software. At the lowest level, executable code consists of machine language instructions specific to an individual processor. A machine language consists of groups of binary values signifying processor instructions that change the state of the computer from its preceding state. Programs are an ordered sequence of instructions for changing the state of the computer in a particular sequence. Software is usually written in high-level programming languages that are easier and more efficient for humans to use (closer to natural language) than machine language. High-level languages are compiled or interpreted into machine language object code. Software may also be written in an assembly language, essentially a mnemonic representation of a machine language using a natural-language alphabet. Assembly language must be assembled into object code via an assembler.

Types of software

Practical computer systems divide software systems into three major classes: system software, programming software and application software, although the distinction is arbitrary, and often blurred.

System software
System software provides the basic functions for computer usage and helps run the computer hardware and system. It includes a combination of the following:
device drivers
operating systems
servers
utilities
window systems
System software is responsible for managing a variety of independent hardware components, so that they can work together harmoniously. Its purpose is to unburden the application software programmer from the often complex details of the particular computer being used, including such accessories as communications devices, printers, device readers, displays and keyboards, and also to partition the computer's resources such as memory and processor time in a safe and stable manner.

Programming software
Programming software usually provides tools to assist a programmer in writing computer programs and software in different programming languages in a more convenient way. The tools include:
compilers
debuggers
interpreters
linkers
text editors
An Integrated development environment (IDE) is a single application that attempts to manage all these functions.

Application software
System software is not aimed at a particular application field. In contrast, application software offers different functions depending on its users and the area it serves. Application software is developed for a specific purpose and can be a single program or a collection of programs, such as a graphics browser or a database management system. Application software allows end users to accomplish one or more specific (not directly computer development related) tasks. Typical applications include:
industrial automation
business software
video games
quantum chemistry and solid state physics software
telecommunications (i.e., the Internet and everything that flows on it)
databases
educational software
Mathematical software
medical software
molecular modeling software
image editing
spreadsheet
simulation software
Word processing
Decision making software
Application software exists for and has impacted a wide variety of topics.

Software topics

Software architecture
Users often see things differently than programmers. People who use modern general purpose computers (as opposed to embedded systems, analog computers and supercomputers) usually see three layers of software performing a variety of tasks: platform, application, and user software.
Platform software: Platform includes the firmware, device drivers, an operating system, and typically a graphical user interface which, in total, allow a user to interact with the computer and its peripherals (associated equipment). Platform software often comes bundled with the computer. On a PC you will usually have the ability to change the platform software.
Application software: Application software or Applications are what most people think of when they think of software. Typical examples include office suites and video games. Application software is often purchased separately from computer hardware. Sometimes applications are bundled with the computer, but that does not change the fact that they run as independent applications. Applications are usually independent programs from the operating system, though they are often tailored for specific platforms. Most users think of compilers, databases, and other "system software" as applications.
User-written software: End-user development tailors systems to meet users' specific needs. User software include spreadsheet templates and word processor templates. Even email filters are a kind of user software. Users create this software themselves and often overlook how important it is. Depending on how competently the user-written software has been integrated into default application packages, many users may not be aware of the distinction between the original packages, and what has been added by co-workers.

Software documentation
Most software has software documentation so that the end user can understand the program, what it does, and how to use it. Without clear documentation, software can be hard to use—especially if it is very specialized and relatively complex like Photoshop or AutoCAD.
Developer documentation may also exist, either with the code as comments and/or as separate files, detailing how the program works and how it can be modified.

Software library
An executable is almost never sufficiently complete for direct execution on its own. Software libraries provide collections of functions and functionality that may be embedded in other applications. Operating systems include many standard software libraries, and applications are often distributed with their own libraries.

Software standard
Since software can be designed using many different programming languages and on many different operating systems and operating environments, software standards are needed so that different software can understand and exchange information with each other. For instance, an email sent from Microsoft Outlook should be readable from Yahoo! Mail and vice versa.

Execution (computing)
Computer software has to be "loaded" into the computer's storage (such as the hard drive or memory). Once the software has loaded, the computer is able to execute the software. This involves passing instructions from the application software, through the system software, to the hardware which ultimately receives the instruction as machine code. Each instruction causes the computer to carry out an operation – moving data, carrying out a computation, or altering the control flow of instructions.
Data movement is typically from one place in memory to another. Sometimes it involves moving data between memory and registers which enable high-speed data access in the CPU. Moving data, especially large amounts of it, can be costly. So, this is sometimes avoided by using "pointers" to data instead. Computations include simple operations such as incrementing the value of a variable data element. More complex computations may involve many operations and data elements together.

Software quality, Software testing, and Software reliability
Software quality is very important, especially for commercial and system software like Microsoft Office, Microsoft Windows and Linux. If software is faulty (buggy), it can delete a person's work, crash the computer and do other unexpected things. Faults and errors are called "bugs." Many bugs are discovered and eliminated (debugged) through software testing. However, software testing rarely, if ever, eliminates every bug; some programmers say that "every program has at least one more bug" (Lubarsky's Law). All major software companies, such as Microsoft, Novell and Sun Microsystems, have their own software testing departments with the specific goal of just testing. Software can be tested through unit testing, regression testing and other methods, which are done manually, or most commonly, automatically, since the amount of code to be tested can be quite large. For instance, NASA has extremely rigorous software testing procedures for many operating systems and communication functions. Many NASA-based operations interact and identify each other through command programs. This enables many people who work at NASA to check and evaluate functional systems overall. Programs containing command software enable hardware engineering and system operations to function together much more easily.
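As a minimal sketch of what a unit test looks like (using Python's built-in unittest module; the add function under test is hypothetical):

    import unittest

    def add(a, b):
        """Hypothetical function under test."""
        return a + b

    class TestAdd(unittest.TestCase):
        def test_positive(self):
            self.assertEqual(add(2, 3), 5)

        def test_negative(self):
            self.assertEqual(add(-2, -3), -5)

    if __name__ == "__main__":
        unittest.main()   # runs every test_* method and reports failures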

License
Main article: Software license
The software's license gives the user the right to use the software in the licensed environment. Some software comes with the license when purchased off the shelf, or an OEM license when bundled with hardware. Other software comes with a free software license, granting the recipient the rights to modify and redistribute the software. Software can also be in the form of freeware or shareware.

Patents
Main articles: Software patent and Software patent debate
Software can be patented; however, software patents are controversial in the software industry, with many people holding different views about them. The controversy is that a patented algorithm or technique may not be duplicated by others, since doing so is treated as intellectual property or copyright infringement, depending on the severity.

Design and implementation

Design and implementation of software varies depending on the complexity of the software. For instance, design and creation of Microsoft Word software will take much more time than designing and developing Microsoft Notepad because of the difference in functionalities in each one.
Software is usually designed and created (coded/written/programmed) in integrated development environments (IDEs) like Eclipse, Emacs and Microsoft Visual Studio that can simplify the process and compile the program. As noted in a different section, software is usually created on top of existing software and the application programming interface (API) that the underlying software provides, like GTK+, JavaBeans or Swing. Libraries (APIs) are categorized for different purposes. For instance, the JavaBeans library is used for designing enterprise applications, the Windows Forms library is used for designing graphical user interface (GUI) applications like Microsoft Word, and Windows Communication Foundation is used for designing web services. Underlying computer programming concepts like quicksort, hashtable, array, and binary tree can be useful to creating software. When a program is designed, it relies on the API. For instance, if a user is designing a Microsoft Windows desktop application, he or she might use the .NET Windows Forms library to design the desktop application and call its APIs like Form1.Close() and Form1.Show() to close or open the application, and write the additional operations the application needs. Without these APIs, the programmer would need to write this functionality entirely him or herself. Companies like Sun Microsystems, Novell, and Microsoft provide their own APIs so that many applications are written using their software libraries, which usually have numerous APIs in them.
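As a sketch of one of the underlying programming concepts mentioned above, here is a minimal quicksort in Python (a simple, readable variant rather than an in-place production implementation):

    def quicksort(items):
        """Sort a list by recursively partitioning around a pivot."""
        if len(items) <= 1:
            return items
        pivot = items[len(items) // 2]
        smaller = [x for x in items if x < pivot]
        equal = [x for x in items if x == pivot]
        larger = [x for x in items if x > pivot]
        return quicksort(smaller) + equal + quicksort(larger)

    print(quicksort([5, 3, 8, 1, 9, 2]))   # [1, 2, 3, 5, 8, 9]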
Computer software has special economic characteristics that make its design, creation, and distribution different from most other economic goods. A person who creates software is called a programmer, software engineer, software developer, or code monkey, terms that all have a similar meaning.

Industry and organizations
A great variety of software companies and programmers in the world comprise the software industry. Software can be quite a profitable industry: Bill Gates, the founder of Microsoft, was the richest person in the world in 2009 largely by selling the Microsoft Windows and Microsoft Office software products. The same goes for Larry Ellison, largely through his Oracle database software. Through time the software industry has become increasingly specialized.
Non-profit software organizations include the Free Software Foundation, GNU Project and Mozilla Foundation. Software standards organizations like the W3C and IETF develop software standards so that most software can interoperate through standards such as XML, HTML, HTTP or FTP.
Other well-known large software companies include Novell, SAP, Symantec, Adobe Systems, and Corel, while small companies often provide innovation.

(source:wikipedia)