Programming languages

This article discusses the major developments in the history of programming languages. For a detailed timeline of events, see the timeline of programming languages.
The first programming languages predate the modern computer. At first, the languages were codes.
During a nine-month period in 1842–1843, Ada Lovelace translated the Italian mathematician Luigi Menabrea's memoir on Charles Babbage's newest proposed machine, the Analytical Engine. To the article she appended a set of notes which specified in complete detail a method for calculating Bernoulli numbers with the Engine, recognized by some historians as the world's first computer program, though some biographers debate the extent of her original contributions versus those of Babbage.[citation needed]
The Jacquard loom, invented in 1801, used holes in punched cards to represent loom arm movements in order to weave decorative patterns automatically.
Herman Hollerith realized that he could encode information on punch cards when he observed that train conductors would encode the appearance of the ticket holders on the train tickets using the position of punched holes on the tickets. Hollerith then proceeded to encode the 1890 census data on punch cards.
The first computer codes were specialized for their applications. In the first decades of the twentieth century, numerical calculations were based on decimal numbers. Eventually it was realized that logic could be represented with numbers, as well as with words. For example, Alonzo Church was able to express the lambda calculus in a formulaic way. The Turing machine was an abstraction of the operation of a tape-marking machine of the kind then in use at telephone companies. However, unlike the lambda calculus, Turing's code does not serve well as a basis for higher-level languages; its principal use is in rigorous analyses of algorithmic complexity.
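As a hedged illustration of Church's insight that numbers and logic can be expressed purely as functions, the following minimal sketch writes Church numerals and booleans as Python lambdas; the helper names are chosen for this example only and are not drawn from any historical source.

    # Church numerals: the number n is "apply a function f n times".
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))

    # Church booleans: selection between two alternatives.
    true = lambda a: lambda b: a
    false = lambda a: lambda b: b

    def to_int(n):
        # Count how many times f was applied by using f = (+1) on 0.
        return n(lambda k: k + 1)(0)

    two = succ(succ(zero))
    print(to_int(two))          # prints 2
    print(true("yes")("no"))    # prints yes
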
Like many "firsts" in history, the first modern programming language is hard to identify. From the start, the restrictions of the hardware defined the language. Punch cards allowed 80 columns, but some of the columns had to be used for a sorting number on each card. Fortran included some keywords which were the same as English words, such as "IF", "GOTO" (go to) and "CONTINUE". The use of a magnetic drum for memory meant that computer programs also had to be interleaved with the rotations of the drum. Thus the programs were more hardware dependent than today.
To some people the answer depends on how much power and human-readability is required before the status of "programming language" is granted. Jacquard looms and Charles Babbage's Difference Engine both had simple, extremely limited languages for describing the actions that these machines should perform. One can even regard the punch holes on a player piano scroll as a limited domain-specific language, albeit not designed for human consumption.

Operating system background

Early computers lacked any form of operating system. The user had sole use of the machine and would arrive armed with program and data, often on punched paper tape. The program would be loaded into the machine, and the machine would be set to work until the program completed or crashed. Programs could generally be debugged via a front panel using switches and lights. It is said that Alan Turing was a master of this on the early Manchester Mark 1 machine, and he was already deriving the primitive conception of an operating system from the principles of the Universal Turing machine.
Later machines came with libraries of support code, which would be linked to the user's program to assist in operations such as input and output. This was the genesis of the modern-day operating system. However, machines still ran a single job at a time; at Cambridge University in England the job queue was at one time a washing line from which tapes were hung with different colored clothes-pegs to indicate job-priority.
As machines became more powerful, the time to run programs diminished and the time to hand off the equipment became very large by comparison. Accounting for and paying for machine usage moved on from checking the wall clock to automatic logging by the computer. Run queues evolved from a literal queue of people at the door, to a heap of media on a jobs-waiting table, or batches of punch-cards stacked one on top of the other in the reader, until the machine itself was able to select and sequence which magnetic tape drives were online. Where program developers had originally had access to run their own jobs on the machine, they were supplanted by dedicated machine operators who looked after the well-being and maintenance of the machine and were less and less concerned with implementing tasks manually. When commercially available computer centers were faced with the implications of data lost through tampering or operational errors, equipment vendors were put under pressure to enhance the runtime libraries to prevent misuse of system resources. Automated monitoring was needed not just for CPU usage but for counting pages printed, cards punched, cards read, disk storage used and for signaling when operator intervention was required by jobs such as changing magnetic tapes.
All these features were building up towards the repertoire of a fully capable operating system. Eventually the runtime libraries became an amalgamated program that was started before the first customer job and could read in the customer job, control its execution, clean up after it, record its usage, and immediately go on to process the next job. Significantly, it became possible for programmers to use symbolic program-code instead of having to hand-encode binary images, once task-switching allowed a computer to perform translation of a program into binary form before running it. These resident background programs, capable of managing multistep processes, were often called monitors or monitor-programs before the term OS established itself.
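The following is a minimal sketch, in Python and for illustration only, of what such a resident monitor did: take the next queued job, control its execution, record its usage, clean up, and move straight on to the next job. The job names and timings are invented for the example.

    import time
    from collections import deque

    job_queue = deque([
        ("payroll", lambda: time.sleep(0.1)),    # stand-ins for customer jobs
        ("inventory", lambda: time.sleep(0.2)),
    ])

    accounting_log = []

    def monitor(queue):
        while queue:
            name, job = queue.popleft()          # read in the next customer job
            start = time.time()
            try:
                job()                            # control its execution
            except Exception as exc:
                print(f"job {name} failed: {exc}")
            finally:
                # record usage; a real monitor would also release tapes,
                # printers and scratch storage here before the next job
                accounting_log.append((name, time.time() - start))
        return accounting_log

    if __name__ == "__main__":
        for name, seconds in monitor(job_queue):
            print(f"{name}: {seconds:.2f}s")
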
An underlying program offering basic hardware-management, software-scheduling and resource-monitoring may seem a remote ancestor to the user-oriented OSes of the personal computing era. But there has been a shift in meaning. With the era of commercial computing, more and more "secondary" software was bundled in the OS package, leading eventually to the perception of an OS as a complete user-system with utilities, applications (such as text editors and file managers) and configuration tools, and having an integrated graphical user interface. The true descendant of the early operating systems is what is now called the "kernel". In technical and development circles the old restricted sense of an OS persists because of the continued active development of embedded operating systems for all kinds of devices with a data-processing component, from hand-held gadgets up to industrial robots and real-time control-systems, which do not run user-applications at the front-end. An embedded OS in a device today is not so far removed as one might think from its ancestor of the 1950s.
The broader categories of systems and application software are discussed in the computer software article.

Operating systems

The history of computer operating systems recapitulates to a degree the recent history of computer hardware.
Operating systems (OSes) provide a set of functions needed and used by most application-programs on a computer, and the necessary linkages for the control and synchronization of the computer's hardware. On the first computers, without an operating system, every program needed the full hardware specification to run correctly and perform standard tasks, and its own drivers for peripheral devices like printers and card-readers. The growing complexity of hardware and application-programs eventually made operating systems a necessity.
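A rough sketch of that idea follows, with invented device names rather than any historical interface: applications call a single OS-provided service layer, and the OS dispatches to device-specific drivers, so no program has to ship its own printer or card-reader code.

    class LinePrinterDriver:
        def write(self, text):
            print(f"[printer] {text}")

    class CardReaderDriver:
        def read(self):
            return "card image"

    class OperatingSystem:
        """Illustrative service layer: one stable interface, many drivers."""
        def __init__(self):
            self._devices = {"printer": LinePrinterDriver(),
                             "reader": CardReaderDriver()}

        def write(self, device, text):
            return self._devices[device].write(text)

        def read(self, device):
            return self._devices[device].read()

    if __name__ == "__main__":
        os_layer = OperatingSystem()
        data = os_layer.read("reader")    # the application never touches hardware
        os_layer.write("printer", data)
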

Web 2.0

Beginning in 2002, new ideas for sharing and exchanging content ad hoc, such as Weblogs and RSS, rapidly gained acceptance on the Web. This new model for information exchange, primarily featuring DIY user-edited and user-generated websites, came to be known as Web 2.0.
The Web 2.0 boom saw many new service-oriented startups catering to a new, democratized Web. Some believe it will be followed by the full realization of a Semantic Web.
Tim Berners-Lee originally expressed the vision of the Semantic Web as follows:[7]
I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A ‘Semantic Web’, which should make this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The ‘intelligent agents’ people have touted for ages will finally materialize.
– Tim Berners-Lee, 1999
Predictably, as the World Wide Web became easier to query, attained a higher degree of usability, and shed its esoteric reputation, it gained a sense of organization and unsophistication which opened the floodgates and ushered in a rapid period of popularization. New sites such as Wikipedia and its sister projects proved revolutionary in executing the user-edited content concept. In 2005, three former PayPal employees formed the video-sharing website YouTube. Only a year later, YouTube had become the fastest-growing website in history, and it even introduced user-submitted content into major events, as in the CNN-YouTube presidential debates.
The popularity of YouTube and similar services, combined with the increasing availability and affordability of high-speed connections, has made video content far more common on all kinds of websites. Many video-content hosting and creation sites provide an easy means for their videos to be embedded on third-party websites without payment or permission.
This combination of more user-created or edited content, and easy means of sharing content, such as via RSS widgets and video embedding, has led to many sites with a typical "Web 2.0" feel. They have articles with embedded video, user-submitted comments below the article, and RSS boxes to the side, listing some of the latest articles from other sites.
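As a hedged sketch of how such an RSS box can be filled, the snippet below fetches an RSS 2.0 feed and lists the latest item titles and links; the feed URL is a placeholder, not a site named above.

    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "https://example.com/feed.xml"   # hypothetical feed address

    def latest_items(url, limit=5):
        with urllib.request.urlopen(url) as response:
            tree = ET.parse(response)
        items = []
        for item in tree.getroot().iter("item"):       # RSS 2.0 <item> elements
            title = item.findtext("title", default="(untitled)")
            link = item.findtext("link", default="")
            items.append((title, link))
        return items[:limit]

    if __name__ == "__main__":
        for title, link in latest_items(FEED_URL):
            print(f"{title} -> {link}")
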
Continued extension of the World Wide Web has focused on connecting devices to the Internet, coined Intelligent Device Management. As Internet connectivity becomes ubiquitous, manufacturers have started to leverage the expanded computing power of their devices to enhance their usability and capability. Through Internet connectivity, manufacturers are now able to interact with the devices they have sold and shipped to their customers, and customers are able to interact with the manufacturer (and other providers) to access new content.

The Web becomes ubiquitous

In the aftermath of the dot-com bubble, telecommunications companies had a great deal of overcapacity as many Internet business clients went bust. That overcapacity, plus ongoing investment in local cell infrastructure, kept connectivity charges low and helped make high-speed Internet connectivity more affordable. During this time, a handful of companies found success developing business models that helped make the World Wide Web a more compelling experience. These include airline booking sites, Google's search engine and its profitable approach to simplified, keyword-based advertising, as well as eBay's do-it-yourself auction site and Amazon.com's online department store.
This new era also gave rise to social networking websites, such as MySpace and Facebook, which, though little known at first, rapidly gained acceptance and became a major part of youth culture.

"Dot-com" boom and bust

The low interest rates of 1998–99 helped increase the amount of start-up capital available. Although a number of these new entrepreneurs had realistic plans and administrative ability, most lacked both, yet they were still able to sell their ideas to investors because of the novelty of the dot-com concept.
Historically, the dot-com boom can be seen as similar to a number of other technology-inspired booms of the past including railroads in the 1840s, radio in the 1920s, transistor electronics in the 1950s, computer time-sharing in the 1960s, and home computers and biotechnology in the early 1980s.
In 2001 the bubble burst, and many dot-com startups went out of business after burning through their venture capital and failing to become profitable.

Commercialization of the WWW

By 1996 it became obvious to most publicly traded companies that a public Web presence was no longer optional.[citation needed] Though at first people saw mainly[citation needed] the possibilities of free publishing and instant worldwide information, increasing familiarity with two-way communication over the "Web" led to the possibility of direct Web-based commerce (e-commerce) and instantaneous group communications worldwide. More dotcoms, displaying products on hypertext webpages, were added to the Web.

Web organization

In May 1994 the first International WWW Conference, organized by Robert Cailliau, was held at CERN; the conference has been held every year since. In April 1993 CERN had agreed that anyone could use the Web protocol and code royalty-free; this was in part a reaction to the perturbation caused by the University of Minnesota announcing that it would begin charging license fees for its implementation of the Gopher protocol.
In September 1994, Berners-Lee founded the World Wide Web Consortium (W3C) at the Massachusetts Institute of Technology with support from the Defense Advanced Research Projects Agency (DARPA) and the European Commission. It comprised various companies that were willing to create standards and recommendations to improve the quality of the Web. Berners-Lee made the Web available freely, with no patent and no royalties due. The World Wide Web Consortium decided that their standards must be based on royalty-free technology, so they can be easily adopted by anyone.
By the end of 1994, while the total number of websites was still minute compared to present standards, quite a number of notable websites were already active, many of which were precursors of or inspirations for today's most popular services.

Early browsers

The turning point for the World Wide Web was the introduction of the Mosaic web browser in 1993, a graphical browser developed by a team at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign (UIUC), led by Marc Andreessen. Funding for Mosaic came from the High-Performance Computing and Communications Initiative, a funding program initiated by then-Senator Al Gore's High Performance Computing and Communication Act of 1991 also known as the Gore Bill.[6]
The origins of Mosaic date to 1992. In November 1992, the NCSA at the University of Illinois (UIUC) established a website. In December 1992, Andreessen and Eric Bina, students attending UIUC and working at the NCSA, began work on Mosaic. They released an X Window browser in February 1993. It gained popularity due to its strong support of integrated multimedia and the authors' rapid response to user bug reports and recommendations for new features.
The first Microsoft Windows browser was Cello, written by Thomas R. Bruce for the Legal Information Institute at Cornell Law School to provide legal information, since more lawyers had access to Windows than to Unix. Cello was released in June 1993.
After graduation from UIUC, Andreessen and James H. Clark, former CEO of Silicon Graphics, met and formed Mosaic Communications Corporation to develop the Mosaic browser commercially. The company changed its name to Netscape in April 1994, and the browser was developed further as Netscape Navigator.

Development of the World Wide Web

The NeXTcube used by Tim Berners-Lee at CERN became the first Web server.
In 1980, the Englishman Tim Berners-Lee, an independent contractor at the European Organization for Nuclear Research (CERN), Switzerland, built ENQUIRE, as a personal database of people and software models, but also as a way to play with hypertext; each new page of information in ENQUIRE had to be linked to an existing page.
In 1984 Berners-Lee returned to CERN, and considered its problems of information presentation: physicists from around the world needed to share data, with no common machines and no common presentation software. He wrote a proposal in March 1989 for "a large hypertext database with typed links", but it generated little interest. His boss, Mike Sendall, encouraged Berners-Lee to begin implementing his system on a newly acquired NeXT workstation. He considered several names, including Information Mesh, The Information Mine (turned down as it abbreviates to TIM, the WWW's creator's name) or Mine of Information (turned down because it abbreviates to MOI which is "Me" in French), but settled on World Wide Web.

He found an enthusiastic collaborator in Robert Cailliau, who rewrote the proposal (published on November 12, 1990) and sought resources within CERN. Berners-Lee and Cailliau pitched their ideas to the European Conference on Hypertext Technology in September 1990, but found no vendors who could appreciate their vision of marrying hypertext with the Internet.
By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first Web browser (named WorldWideWeb, which was also a Web editor), the first HTTP server software (later known as CERN httpd), the first web server (http://info.cern.ch), and the first Web pages that described the project itself. The browser could access Usenet newsgroups and FTP files as well. However, it could run only on the NeXT; Nicola Pellow therefore created a simple text browser that could run on almost any computer. To encourage use within CERN, they put the CERN telephone directory on the web — previously users had had to log onto the mainframe in order to look up phone numbers.
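A minimal sketch of that original protocol, HTTP/0.9, is shown below: the client sends a bare GET line and the server replies with raw HTML and then closes the connection, with no status line or headers. The example uses Python sockets purely for illustration; a modern server at the same address may no longer accept such a request.

    import socket

    def http09_get(host, path="/"):
        with socket.create_connection((host, 80), timeout=10) as sock:
            sock.sendall(f"GET {path}\r\n".encode("ascii"))   # bare HTTP/0.9 request
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:            # server closes the connection when done
                    break
                chunks.append(data)
        return b"".join(chunks).decode("latin-1", errors="replace")

    if __name__ == "__main__":
        print(http09_get("info.cern.ch"))
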
Paul Kunz from the Stanford Linear Accelerator Center visited CERN in May 1991, and was captivated by the Web. He brought the NeXT software back to SLAC, where librarian Louise Addis adapted it for the VM/CMS operating system on the IBM mainframe as a way to display SLAC’s catalog of online documents; this was the first web server outside of Europe and the first in North America.
On August 6, 1991, Berners-Lee posted a short summary of the World Wide Web project on the alt.hypertext newsgroup. This date also marked the debut of the Web as a publicly available service on the Internet.
"The WorldWideWeb (WWW) project aims to allow all links to be made to any information anywhere. [...] The WWW project was started to allow high energy physicists to share data, news, and documentation. We are very interested in spreading the web to other areas, and having gateway servers for other data. Collaborators welcome!" — from Tim Berners-Lee's first message
An early CERN-related contribution to the Web was the parody band Les Horribles Cernettes, whose promotional image is believed to be among the Web's first five pictures.

1992–1995: Growth of the WWW

In keeping with its birth at CERN, early adopters of the World Wide Web were primarily university-based scientific departments or physics laboratories such as Fermilab and SLAC.
Early websites intermingled links for both the HTTP web protocol and the then-popular Gopher protocol, which provided access to content through hypertext menus presented as a file system rather than through HTML files. Early Web users would navigate either by bookmarking popular directory pages, such as Berners-Lee's first site at http://info.cern.ch/, or by consulting updated lists such as the NCSA "What's New" page. Some sites were also indexed by WAIS, enabling users to submit full-text searches similar to the capability later provided by search engines.
There was still no graphical browser available for computers besides the NeXT. This gap was filled in April 1992 with the release of Erwise, an application developed at Helsinki University of Technology, and in May by ViolaWWW, created by Pei-Yuan Wei, which included advanced features such as embedded graphics, scripting, and animation. Both programs ran on the X Window System for Unix.
Students at the University of Kansas adapted an existing text-only hypertext browser, Lynx, to access the web. Lynx was available on Unix and DOS, and some web designers, unimpressed with glossy graphical websites, held that a website not accessible through Lynx wasn’t worth visiting.

World Wide Web

The World Wide Web ("WWW" or simply the "Web") is a global information medium which users can read and write via computers connected to the Internet. The term is often mistakenly used as a synonym for the Internet itself, but the Web is a service that operates over the Internet, as e-mail does. The history of the Internet dates back significantly further than that of the World Wide Web.
The hypertext portion of the Web in particular has an intricate intellectual history; notable influences and precursors include Vannevar Bush's Memex, IBM's Generalized Markup Language, and Ted Nelson's Project Xanadu.
The concept of a home-based global information system goes at least as far back as "A Logic Named Joe", a 1946 short story by Murray Leinster, in which computer terminals, called "logics," were in every home. Although the computer system in the story is centralized, the story captures some of the feeling of the ubiquitous information explosion driven by the Web.

About the Web History Center

WHY IT MATTERS
In just a decade the vast growth of the Web has utterly transformed the ways that most of us use and store information, perhaps as completely as the printing press did starting half a millennium ago. The process continues, with no end in sight.
Yet many of the records of this historic transformation - and the little-known 75-year history of brilliant innovation that led to it - are disappearing. Every day thousands of pages and untold bytes of irreplaceable source material are lost. Software programs that blazed trails in the science of handling information deteriorate on old disks and tapes without backups. New Web professionals are often unaware of prior achievements in their own field, and the resulting ignorance can slow innovation.
Since 1995 interest in various aspects of Web history has gathered momentum, and this has led to some important successes in saving portions of our digital heritage. These include the preservation work of the former Web History Project and especially that of some of the world-class institutions now becoming Web History Center members.
But until now, no single entity has been tracking or tying together those independent efforts. There has been no clearinghouse for information for either archivists or the public, no coordinated preservation initiative, and no easy access to the materials already preserved.
By addressing these unmet needs, the Web History Center will save significant amounts of at-risk materials. Because the main role of the WHC will be to facilitate the preservation of materials by existing institutions rather than duplicate their efforts, from the outset it will have a permanent, tangible impact on how much of the history of our digital heritage is available to future generations.

Introducing the Web History Center

The World Wide Web was designed to bolster the sharing and availability of information. Ironically, the actual records of this technical and cultural transformation are in danger of being lost.
"The good thing about digital media is that you can save everything. The bad thing about digital media is that you can lose everything."- Brewster Kahle, Web pioneer, founder of The Internet Archive
The Web History Center is a non-profit educational organization dedicated to making public the history of the World Wide Web and preserving it for posterity.
The Web History Center helps researchers and collection holders to preserve and promote the history of the web era by:
identifying important and at-risk collections;
facilitating the preservation of historical materials;
linking conservation efforts by various institutions and individuals;
making the web's heritage accessible to educators, students, corporations and the general public.
Historical note
The Web History Center has merged with the Web History Project formerly hosted here at webhistory.org. Pages from the Web History Project and the 1997 Web History Day are preserved for historical purposes.

A Little History of the World Wide Web

1945
Vannevar Bush writes an article in Atlantic Monthly about a photo-electrical-mechanical device called a Memex, for memory extension, which could make and follow links between documents on microfiche.


1960s


Doug Engelbart prototypes an "oNLine System" (NLS) which does hypertext browsing, editing, email, and so on. He invents the mouse for this purpose. See the Bootstrap Institute library.
Ted Nelson coins the word Hypertext in A File Structure for the Complex, the Changing, and the Indeterminate. 20th National Conference, New York, Association for Computing Machinery, 1965. See also: Literary Machines, a hypertext bibliography.
Andy van Dam and others build the Hypertext Editing System and FRESS in 1967.


1980
While consulting for CERN June-December of 1980, Tim Berners-Lee writes a notebook program, "Enquire-Within-Upon-Everything", which allows links to be made between arbitrary nodes. Each node had a title, a type, and a list of bidirectional typed links. "ENQUIRE" ran on Norsk Data machines under SINTRAN-III. See: Enquire user manual as scanned images or as HTML page(alt).


1989
March
"Information Management: A Proposal" written by Tim BL and circulated for comments at CERN (TBL). Paper "HyperText and CERN" produced as background (text or WriteNow format).


1990


May
Same proposal recirculated
September
Mike Sendall, Tim's boss, OKs the purchase of a NeXT cube, and allows Tim to go ahead and write a global hypertext system.
October
Tim starts work on a hypertext GUI browser+editor using the NeXTStep development environment. He makes up "WorldWideWeb" as a name for the program (see the first browser screenshot) and "World Wide Web" as a name for the project (over Information Mesh, Mine of Information, and Information Mine).
Project original proposal reformulated with encouragement from CN and ECP divisional management. Robert Cailliau (ECP) joins and is co-author of new version.
November
Initial WorldWideWeb program development continues on the NeXT (TBL). This was a "what you see is what you get" (wysiwyg) browser/editor with direct inline creation of links. The first web server was nxoc01.cern.ch, later called info.cern.ch, and the first web page was http://nxoc01.cern.ch/hypertext/WWW/TheProject.html. Unfortunately CERN no longer supports the historical site. Note from this era too the least recently modified web page we know of, last changed Tue, 13 Nov 1990 15:17:00 GMT (though the URI has since changed).
November


Technical Student Nicola Pellow (CN) joins and starts work on the line-mode browser. Bernd Pollermann (CN) helps get interface to CERNVM "FIND" index running. TBL gives a colloquium on hypertext in general.
Christmas


Line mode browser and WorldWideWeb browser/editor demonstrable. Access is possible to hypertext files, CERNVM "FIND", and Internet news articles.


1991


February
Workplan produced for the purposes of the ECP division.


26 February 1991
Presentation of the project to the ECP/PT group.


March


Line mode browser (www) released to a limited audience on "priam" vax, rs6000, sun4.


May


Workplan produced for CN/AS group


17 May


Presentation to "C5" Committee. General release of WWW on central CERN machines.


12 June


CERN Computer Seminar on WWW.


August


Files available on the net by FTP, posted on alt.hypertext (6, 16, 19th Aug), comp.sys.next (20th), comp.text.sgml and comp.mail.multi-media (22nd). Jean-Francois Groff joins the project.
October


VMS/HELP and WAIS gateways installed. Mailing lists www-interest (now www-announce) and www-talk@info.cern.ch (see archive) started. One year status report. Anonymous telnet service started.


December
Presented poster and demonstration at Hypertext'91 in San Antonio, Texas (US). W3 browser installed on VM/CMS. CERN computer newsletter announces W3 to the HEP world.
Dec 12: Paul Kunz installs first Web server outside of Europe, at SLAC.


1992


15 January
Line mode browser release 1.1 available by anonymous FTP (see news). Presentation to AIHEP'92 at La Londe (FR).


12 February
Line mode v 1.2 announced on alt.hypertext, comp.infosystems, comp.mail.multi-media, cern.sting, comp.archives.admin, and mailing lists.


April
29th April: Release of Finnish "Erwise" GUI client for X mentioned in review by TimBL.


May


Pei Wei's "Viola" GUI browser for X test version dated May 15. (See review by TimBL)


At CERN, Presentation and demo at JENC3, Innsbruck (AT). Technical Student Carl Barker (ECP) joins the project.


June
Presentation and demo at HEPVM (Lyon). People at FNAL (Fermi National Accelerator Laboratory (US)), NIKHEF (Nationaal Instituut voor Kern- en Hoge Energie Fysika, (NL)), DESY (Deutsches Elektronen Synchrotron, Hamburg, (DE)) join with WWW servers.


July
Distribution of WWW through CernLib, including Viola. WWW library code ported to DECnet. Report to the Advisory Board on Computing.


August
Introduction of CVS for code management at CERN.


September
Plenary session demonstration to the HEP community at CHEP'92 in Annecy (FR).


November
Jump back in time to a snapshot of the WWW Project Page as of 3 Nov 1992 and the WWW project web of the time, including the list of all 26 reasonably reliable servers, NCSA's having just been added, but no sign of Mosaic.


1993
January
By now, Midas (Tony Johnson, SLAC), Erwise (HUT), and Viola (Pei Wei, O'Reilly Associates) browsers are available for X; CERN Mac browser (ECP) released as alpha. Around 50 known HTTP servers.


February
NCSA releases the first alpha version of Marc Andreessen's "Mosaic for X". Computing seminar at CERN. The University of Minnesota announces that it will begin to charge licensing fees for Gopher's use, which causes many volunteers and employees to stop using it and switch to WWW.


March
WWW (Port 80 HTTP) traffic measures 0.1% of NSF backbone traffic. WWW presented at Online Publishing 93, Pittsburgh.
The Acceptable Use Policy prohibiting commercial use of the Internet is re-interpreted, so that commercial use becomes allowed.


April


April 30: Date on the declaration by CERN's directors that WWW technology would be freely usable by anyone, with no fees being payable to CERN. A milestone document.


July
Ari Luotonen (ECP) joins the project at CERN. He implements access authorisation and proceeds to re-write the CERN httpd server.


July 28-30
O'Reilly hosts first WWW Wizards Workshop in Cambridge Mass (US).


September
WWW (Port 80 http) traffic measures 1% of NSF backbone traffic. NCSA releases working versions of Mosaic browser for all common platforms: X, PC/Windows and Macintosh.
September 6-10: On a bus at a seminar on Information at Newcastle University, MIT's Prof. David Gifford suggests that Tim BL contact Michael Dertouzos of MIT/LCS as a possible consortium host site.


October
Over 200 known HTTP servers. The European Commission, the Fraunhofer Gesellschaft and CERN start the first Web-based project of the European Union (DG XIII): WISE, using the Web for dissemination of technological information to Europe's less favoured regions.


December
WWW receives IMA award. John Markoff writes a page and a half on WWW and Mosaic in "The New York Times" (US) business section. "The Guardian" (UK) publishes a page on WWW, and "The Economist" (UK) analyses the Internet and WWW. Robert Cailliau gets the go-ahead from CERN management to organise the First International WWW Conference at CERN.


1994
January
O'Reilly, Spry, etc announce "Internet in a box" product to bring the Web into homes.
March
Marc Andreessen and colleagues leave NCSA to form "Mosaic Communications Corp" (later Netscape).
May 25-27
First International WWW Conference, CERN, Geneva. Heavily oversubscribed (800 apply, 400 allowed in): the "Woodstock of the Web". VRML is conceived here. TBL's closing keynote hints at upcoming organization. (Some of Tim's slides on Semantic Web)
June
M. Bangemann report on European Commission Information Superhighway plan. Over 1500 registered servers.
Load on the first Web server (info.cern.ch) is 1000 times what it had been 3 years earlier.
July
MIT/CERN agreement to start W3 Organisation is announced by Bangemann in Boston. Press release. AP wire. Reports in Wall Street Journal, Boston Globe etc.
August
Founding of the IW3C2: the International WWW Conference Committee, in Boston, by NCSA and CERN.
September
The European Commission and CERN propose the WebCore project for development of the Web core technology in Europe.
1 October
World Wide Web Consortium founded.
October
Second International WWW Conference: "Mosaic and the Web", Chicago. Also heavily oversubscribed: 2000 apply, 1300 allowed in.
14 December
First W3 Consortium Meeting at M.I.T. in Cambridge (USA).
15 December
First meeting with European Industry and the European Consortium branch, at the European Commission, Brussels.


16 December
CERN Council approves unanimously the construction of the LHC (Large Hadron Collider) accelerator, CERN's next machine and competitor to the US' already defunct SSC (Superconducting Supercollider). Stringent budget conditions are however imposed. CERN thus decides not to continue WWW development, and in concertation with the European Commission and INRIA (the Institut National pour la Recherche en Informatique et Automatique, FR) transfers the WebCore project to INRIA.


1995
February
The Web is the main reason for the theme of the G7 meeting hosted by the European Commission in the European Parliament buildings in Brussels (BE).
March
CERN holds a two-day seminar for the European Media (press, radio, TV), attended by 250 reporters, to show WWW. It is demonstrated on 60 machines, with 30 pupils from the local International High School helping the reporters "surf the Web".
April
Third International WWW Conference: "Tools and Applications", hosted by the Fraunhofer Gesellschaft, in Darmstadt (DE)
June
Founding of the Web Society in Graz (AT), by the Technical University of Graz (home of Hyper-G), CERN, the University of Minnesota (home of Gopher) and INRIA.

Laptop Security - Do Not Panic If You Lost Your Computer

We are in a new generation where desktop computers have given way to the much less bulky laptop. Almost everybody has a personal computer, and every day we add more valuable information to it that we would no doubt hate to lose. Think about what you trust your laptop to carry: emails, photographs, many gigabytes of music, conversations, downloaded programs, your list of favorite websites, along with several files of banking and other personal data. We entrust these things to our laptops while most of us never concern ourselves with backing up our hard drive or keeping paper copies of anything; the only source for our valuable photos, music and documents is our laptop, yet we never stop to think about the possibility of it being lost or stolen.

Although many of us will never bother to back up our data, we hate the thought - and the possibility - of losing these things. It can take months to replace some of the information on a computer if it is ever lost or stolen, while other things we keep on it - like photographs - may be lost for good. Since we have embraced a new lifestyle where our computer travels with us, we must be prepared to defend it like anything else we carry on our person - like a bag from pickpockets. Because of the vastness and value of the information that we entrust to our laptops, it is nowhere near enough to simply install a lock screen that demands a password; a thief will take the entire notebook regardless, and your photos and music will go along with it.

To have peace of mind about the information you keep on a personal computer, that computer must have a robust security system that will not only deter a thief but will also recover all of your information in case of theft or loss. Imagine never having to worry about losing precious photographs that have no duplicates, or files that you have been working on for ages - imagine being prepared for the worst-case scenario. A lost or stolen laptop may send a twinge of worry through you, but with the new security programs available on the market you will never have to be troubled for long.

There is a range of laptop mobile security software available on the market that will put your mind at ease about the possibility of laptop loss. With such software your information is never lost, nor will it enter anybody else's hands. These programs allow you to recover all your files via an internet account and to destroy the data on the lost computer beyond restoration; but before you choose to do all that, the software will tell you exactly where your computer is, just in case you simply left it somewhere close.

A locking screen is the first line of defense with some innovative programs - leave it on when you leave for vacation or even when you're gone for a brief period; an incorrect password entry on this screen immediately prompts an email to be sent to your address - check it from your BlackBerry or the hotel computer - and you will instantly know someone has tried to unlock your laptop. The email will include a map with the notebook's immediate location, good for tracking a lost computer or being alerted to a theft.
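As a rough sketch of the alert flow just described (not any particular vendor's implementation), the snippet below emails the owner when a wrong password is entered; the mail server address, owner address, and the lookup_location helper are hypothetical placeholders.

    import hashlib
    import smtplib
    from email.message import EmailMessage

    SMTP_HOST = "smtp.example.com"       # hypothetical mail server
    OWNER_EMAIL = "owner@example.com"    # hypothetical owner address

    def lookup_location():
        # Hypothetical helper: a real product would query GPS, Wi-Fi
        # positioning, or an IP geolocation service for coordinates.
        return "location unavailable in this sketch"

    def alert_owner():
        msg = EmailMessage()
        msg["Subject"] = "Failed unlock attempt on your laptop"
        msg["From"] = OWNER_EMAIL
        msg["To"] = OWNER_EMAIL
        msg.set_content("Someone entered an incorrect password.\n"
                        "Approximate location: " + lookup_location())
        with smtplib.SMTP(SMTP_HOST) as server:
            server.send_message(msg)

    def check_password(entered, stored_sha256_hex):
        if hashlib.sha256(entered.encode()).hexdigest() != stored_sha256_hex:
            alert_owner()                # wrong password: notify the owner
            return False
        return True
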

If your laptop has been compromised and you realize that recovery is impossible, your newly installed laptop security software steps up to the plate with a solution that leaves you relatively unscathed. Using an online account you can recover absolutely every single file on the lost computer when it connects to the internet, and you can also delete all the information so the thief is left with nothing. Imagine recovering precious files, music and photographs despite having lost the actual hardware.

Mobile Safepatrol's laptop security software has everything you need to prevent a bad situation from getting worse. Just because your laptop is lost doesn't mean you have to be; never miss a beat with this security software and guarantee yourself a line of defense against theft or loss. You never know how disabling it is to lose all your information until it actually happens, so don't ever let yourself feel that way.

By visiting the website below, you can gain even more valuable tips, strategies, and resources for your data protection and laptop security. Mobile SafePatrol (http://www.laptopmobilesecurity.com/index.php) allows you to recover all your files via an internet account.

Maya Lem is a writer and successful internet marketer who provides free valuable tips and strategies for your computer security needs.

Easy PC - The Incredible Shrinking Computer

The evolution of the personal computer - that is, a computer built around a microprocessor for use by an individual - has had a massive part to play in how people function in their daily lives. From the first prototypes of the early 1970s, reserved almost exclusively for those in academic and research institutions, to the ubiquitous hand-held contraptions of the 21st century, the journey from laboratory to pocket would make for a very lively discussion in itself.

In a nutshell though, the introduction of the microprocessor in the mid 1970s helped to liberate personal computing and paved the way for early pioneers such as IBM, Commodore and Apple to develop machines that would not only be commercially successful, but act as the building blocks for an industry that most people take for granted these days.

Indeed, it may be difficult to imagine life without computers, but it has only been around fifteen years or so since the internet gained a strong foothold across the world and ultimately helped make computers a global phenomenon.

From the moment we wake to when we go to bed at night, we are surrounded by computers of all shapes and sizes, with many mobile phones now acting as pocket computers, equipped with powerful operating systems, internet access, gaming capability and just about anything else a full-scale desktop computer can offer.

Today, it's estimated that there are around one billion networked PCs across the world. Consumerism has gradually shifted from the offline to the online environment, and countless people now do their banking, book their holidays, shop for clothes and groceries and even buy their car through the World Wide Web. To say that computers are omnipresent would be an understatement.

For business or pleasure, computers now go with us everywhere. Laptops are great for those who travel often, as they enable them to tap into the cyber highway wherever they can access a Wi-Fi or Ethernet connection. Those with a more portable requirement can use mobile phones and PDAs to connect to the Web.

These latter pocket-sized options may be great for email or occasional internet usage, but the screen size and power limitations can be restrictive for those who need something with a little more kick.

Netbooks are becoming an increasingly popular option with people who want a halfway house between a full-sized laptop and a handheld device. Designed primarily for web-browsing and applications that don't require a lot of power, netbooks now account for around 20% of the total laptop market and are normally considerably cheaper than traditional desktop or laptop computers.

Discount netbooks have made personal computers affordable for everyone and are proving to be instrumental in the drive to make the internet available to people from all social and financial backgrounds.

In the UK alone, it's estimated that 65% of all UK households are currently wired up to the World Wide Web and, with the introduction of new affordable and portable personal computers, this figure can only increase.

Victoria Cochrane writes for a digital marketing agency. This article has been commissioned by a client of said agency. This article is not designed to promote, but should be considered professional content.

Get Fast and Easy Computer Repair

All around the world today, computers have become a part of our daily life. They are used in every sphere of our lives, and their usage has increased enormously. As a result, there has been an increase in problems and queries regarding computers' output and functioning, which has created a big market opportunity in the field of computer repairs.

Whether at home, at work, or in any other area of daily life, computers have become irreplaceable. Computer technology has undergone many changes since it came into use, and as the technology has improved, so has the complexity of the machine. This has led to increased demand for computer-repair specialists who can help users with efficient onsite computer repairs.

There are many companies which provide online computer-repair services through well-certified technicians. These technicians first examine the issue and then offer help with all sorts of computer problems, covering system repair, software, and hardware. The process they generally use to sort out your computer problem is remote access. There is another form of computer support in which you receive instructions from the technicians and then perform the tasks on your own system.

If you are looking for computer repairs in Sydney or mobile computer repairs, you can search online and either have a technician visit you in person or receive the service over the phone. It is advisable, though, to use the remote repair service if you are looking for a long-term resolution. The technicians who offer this service generally work for vendors of computer hardware and software.

These specialists handle client requests and queries by accessing the client's computer remotely from their workstations, using a user ID and password. They then analyze and resolve the problem and, if required, clean, install, and modify the system and check its hardware and software.

These technicians are very efficient and deal with your issues directly. They listen to your queries carefully and then walk you through the entire process of resolving your problem. These online services are available 24x7, so whenever you face a problem you can call and ask for assistance. Remote-access support is more popular with clients because it provides efficient, quality solutions and is also cost-effective: you get an immediate resolution rather than waiting weeks for the problem to be fixed, and it is hassle-free because you do not need a prior appointment or to carry your system all the way to the repair shop.

8 Keys to PLC Systems Integration - Keeping it Simple & Affordable

Selecting an OLE, MES or SCADA software solution can be complex; however, following these 8 steps when upgrading or implementing such a system will simplify the process and make it less expensive.

Let's look at this process from a high level. Modern manufacturing supervisory control systems can integrate with just about any control device imaginable, most commonly PLCs (Programmable Logic Controllers) or other devices with their own specific protocols used to control some part of your manufacturing. The key term for this capability is a standard called OPC, which allows all industrial automation systems to be integrated regardless of the past product or technology selections of the electrical engineers who designed the system.
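To make the idea concrete, here is a minimal sketch of reading a single tag over OPC UA (the current form of the OPC standard) using the open-source python-opcua package; the endpoint URL and node id are hypothetical placeholders for your own PLC or gateway, not values from this article.

    from opcua import Client   # pip install opcua

    ENDPOINT = "opc.tcp://192.168.1.10:4840"   # hypothetical PLC/gateway endpoint
    NODE_ID = "ns=2;i=2"                       # hypothetical tag (e.g. a line speed)

    def read_tag(endpoint=ENDPOINT, node_id=NODE_ID):
        client = Client(endpoint)
        client.connect()
        try:
            node = client.get_node(node_id)    # address the tag by node id
            return node.get_value()            # read its current value
        finally:
            client.disconnect()

    if __name__ == "__main__":
        print("tag value:", read_tag())
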

Any plant floor supervisory or data acquisition system can be done more affordably than ever before if the right approach is taken and the right partnerships are made.

1. Start Small Without Getting Suckered. Many of the traditional offerings charge a low fee for small tag counts and client run-times (log-ins) and then charge much more for higher tag counts and run-times. New competitive products charge nothing for run-times/users and include unlimited tag bases without breaking the budget. Many of the sales pitches from the leaders are geared at OEE (Overall Equipment Effectiveness), which attempts to combine the few most important measures of productivity into one KPI (Key Performance Indicator). This is a noble goal, but the reality in most cases is that it will take a lot of work to get there and will not be a drop-in solution.

2. It's the Network, Dummy. Don't overpay for software and ignore the network infrastructure. Hardware and software costs are coming down across the board, and this is especially true of network equipment. Old hubs and overrated network switches have no place in a modern Ethernet network. It was once acceptable to have a certain amount of data loss on your network; this is no longer true. Routers and managed switches do not have to cost a bundle either, allowing for real security, and diagnostics in these devices now let us confirm that there is no data loss on the network connection.

3. Get the Right Tools. Find tools to help document existing equipment, documentation and related software. Doing this the old-fashioned way is a waste of time, effort, and money, and the rewards go far beyond a software roll-out. Solutions that work, like network port scanners, can help you locate and diagram the network in hours instead of weeks (a minimal sketch of such a scan follows the last point in this list). You will then be able to put existing related material against those devices and publish it for the organization. The reward for this effort will be realized every time you quickly find answers in the system.

4. Understand What You Are Buying. Who is the man behind the curtain? OPC is a standard over which no major player has ever had control, aside from the body that publishes it, the OPC Foundation. Many vendors have packaged their solutions with licensed offerings from third parties, and these third-party solutions are certified and tested by the OPC Foundation. Many of the platforms rely on code libraries released under the GPL (General Public License), which cost them nothing. It is not uncommon for a PLC (Programmable Logic Controller) product to be based on Linux or FreeBSD, which are likewise freely licensed. These free or low-cost components dramatically lower the cost for new entrants into the market.

5. Don't Get Fooled by the Name. As we have seen recently, no company is too big to fail. This is just as true for the big players in the industrial automation market. Some of their best customers are going through incredibly trying times. Many of the big industrial automation firms are already struggling, in an increasingly competitive market, to make their acquisitions and licensed offerings bring in the expected revenue. Outsourcing and layoffs are happening, and important knowledge often goes out the door as well.

6. Not All Platforms Are Built for the Web. Web-launched applications built on platforms and languages designed for web technology are key to ensuring that secure, functional, and manageable solutions can be developed. Everyone will assure you that you can access the product via the web; it is security and licensing that will make or break this in practice. For example, Microsoft charges licensing fees for Active Directory users on servers, and IT departments have a strong sense of ownership over Active Directory, which provides security and authentication (computer/server log-in) to network resources. Web application platforms will have their own security model that functions separately from the office domain.

7. Be Skeptical of the "Legacy." Many of the next-generation packages from market leaders take a start-fresh approach to address this. It comes as no surprise that large companies have trouble reinventing themselves around these new offerings. It is not uncommon to see complete false starts and forced migrations to competing products after acquisitions. The market leaders all have an incentive to bring current customers to the new base platform. Don't get caught up in the newest thing simply because it has been made to look like the next version of what you already use.

8. Focus on Total Solution Cost, Not Initial Cost. Many software companies spend a great deal of time and money trying to corner the market with low up-front fees, but eventually this strategy falls apart. The key is to find an integrator with the experience and knowledge to provide the best solution at the best total cost.
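The network-discovery sketch promised under point 3 above follows, written in Python for illustration. The subnet and port numbers are assumptions (80 for web interfaces, 102 and 502 for common industrial protocols), this is a simple active sweep rather than a full documentation tool, and you should only scan networks you are responsible for.

    import socket

    SUBNET = "192.168.1"                 # hypothetical plant-floor subnet
    PORTS = [80, 102, 502]               # e.g. HTTP, ISO-TSAP, Modbus/TCP

    def scan(subnet=SUBNET, ports=PORTS, timeout=0.2):
        found = {}
        for host_id in range(1, 255):
            host = f"{subnet}.{host_id}"
            open_ports = []
            for port in ports:
                with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
                    sock.settimeout(timeout)
                    if sock.connect_ex((host, port)) == 0:   # 0 means the port answered
                        open_ports.append(port)
            if open_ports:
                found[host] = open_ports
        return found

    if __name__ == "__main__":
        for host, open_ports in scan().items():
            print(host, open_ports)
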

All Computers Price Comparison - Points to Keep in Mind

Computers are an inseparable part of our lives today. From the tiniest of businesses to large corporate houses, and even day-to-day household chores, everything seems to require computers. They have made our lives easier and our work faster.

As our dependency on computers is increasing by the day, so is their variety in the market. You have numerous types, brands, sizes, shapes and features of computers to choose from, with varied prices.

As consumers, we can find this vast variety very confusing at times. It is therefore advisable to always carry out an All Computers Price Comparison. This is well worth doing, as an All Computers Price Comparison will enable you to make a better buy in all regards. This is because -

• Firstly, the All Computers Price Comparison available on many sites provides you with a list of the different brands available within your budget. You can get a set for anywhere between $400 and $1490.
• The All Computers Price Comparison sites give you the brands which support the kind of computer you want to buy. There are many to choose from: HP, Fujitsu, Lenovo and IBM are among the more established ones, alongside others like Shuttle, Compaq, Acer and Dell.
• On the All Computers Price Comparison site you can also compare on the basis of features such as RAM, hard drive capacity, processor type or system type (a small sketch of this kind of filtering follows this list).
• There are other accessories and features as well, which these sites inform you about in detail. Some of these are a built-in CD/DVD player/writer, battery life, the software you get with your set, video memory, speaker quality and number, and monitor size and resolution.
• They also provide you with feedback and star ratings on every product.
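As promised above, here is a small sketch of the kind of filtering such comparison sites perform behind the scenes, written in Python with made-up catalogue entries rather than real products or prices.

    laptops = [
        {"brand": "HP",   "price": 650, "ram_gb": 1,   "hdd_gb": 160},
        {"brand": "Dell", "price": 780, "ram_gb": 2,   "hdd_gb": 250},
        {"brand": "Acer", "price": 430, "ram_gb": 0.5, "hdd_gb": 80},
    ]

    def compare(items, max_price, min_ram_gb=1, min_hdd_gb=80):
        # Keep only machines within budget that meet the minimum specs,
        # then list them cheapest first.
        matches = [pc for pc in items
                   if pc["price"] <= max_price
                   and pc["ram_gb"] >= min_ram_gb
                   and pc["hdd_gb"] >= min_hdd_gb]
        return sorted(matches, key=lambda pc: pc["price"])

    if __name__ == "__main__":
        for pc in compare(laptops, max_price=800):
            print(pc["brand"], pc["price"])
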

Here are a few things to be kept in mind while consulting the All Computers Price Comparison option.

• Computers are of various kinds - laptop, desktop, palm top, tablet PCs etc, each for a different purpose and price. Therefore know what you want depending upon the need.
• Stick to your budget. Anything ranging from $600 - $800 would prove to be a decent buy for computers.
• There are numerous cabinet covers, designs and shapes in computers to choose from.
• Be specific and selective about the features and accessories. Know whether 128 MB of RAM or 1 GB suits you, whether a Pentium 3 or a Celeron processor is enough, and whether an 80 GB or a 160 GB hard disk will do. Do not over-indulge.
• Register with your All Computers Price Comparison site beforehand if required.
• Don't forget the warranties, guarantees or licenses, whichever is applicable.
• Ask about after-sales service and find out who the nearest dealer in town is.

Computer Training - Continuing Problems and a New Solution

Most people say they are users of Word (or Excel, Powerpoint, Photoshop, Illustrator, InDesign, etc.). Yet if I then ask them to perform some relatively simple task beyond just typing, they usually seem totally stumped. Unfortunately, as someone famous may or may not have said, "if they don't know what they don't know, how will they know what they need to know?". This is the computer training conundrum.

Employers will assume that self-professed "users" of common programs understand and employ such programs efficiently and effectively in their everyday work. We generally expect people's computer skills to be similar to reading and writing: most people who join the workforce should be reasonably good at them. This may be why costly computer training in common computer tasks is rarely provided in the workplace. To add to this litany of woes, universities no longer appear to teach the use of popular software, expecting this to have been done at school. I'm not aware that schools are filling this gap, and many lecturers and school teachers I know have little to no computer skills themselves.

So ... the majority of people have learned their computer skills through trial and error, reference manuals, help from friends & colleagues, intuition, osmosis, alien abduction, etc. Apart from the latter, which makes it difficult to sit still for long periods, everything is fine and dandy except that you remain knowing only what you know, and what you know may well be wrong, inefficient, a compatibility method left over from some ancient version, and so on. There is still no understanding of relevance, importance, perspective, and how to be self-reliant when things go pear-shaped. (For readers outside the British Commonwealth, pear-shaped is the London opposite of Australia's she's apples [mate], or otherwise just means shaped like a pear). This ezine article seems to be getting fruit-y, a term which ...

But, hey, shouldn't everyone be able to pick up how to do things properly? Computers, operating systems and programs are designed with GUIs and ease-of-use in mind. Eh? Yeah, RIGHT! Think of the hundreds of millions of people around the world in business, science, and public service working at 80% efficiency on their computers (a charitable estimate). If we could raise this figure to just 90%, imagine the massive effect on world GDP. The additional time devoted to achieving constructive thought and action rather than battling software might contribute to the cure for cancer, world peace, colonies on Mars, smarter Miss Universe contestants, the end of hip-hop, etc. If those ambitions are too lofty, it would certainly lead to happier, less stressed and more productive staff, employers, students, writers, business people, etc. (notice I didn't include administrators, politicians and clergy in the list).

If you think that this is a cynical appraisal of general computer usage, what will I say about 'Computer Training'? It's got to be good thing, hasn't it? We-e-e-ell ...

With experience of IT support dating back to mainframes, punched cards, pterodactyls, etc., having attended numerous technical and business courses, and more recently specialising as a trainer, I have observed that:

  • training choices by individuals are often poor (e.g. learning Photoshop is pointless if you have no natural flair for design, and Excel is dangerous unless you are good with numbers)
  • each of us has a preferred method of learning, e.g. overview vs. detail, graphic vs. verbal, serious vs. humorous, lecture vs. discussion, theory vs. practical, etc., but rarely does a course employ the training method that best suits us
  • traditional computer training typically involves a whole exhausting day in a darkened, air-conditioned room with people of widely ranging skills and experience. For some, the course may be either too advanced or maddeningly slow
  • training companies often set arbitrary expertise levels for what is taught, such as Introduction/Intermediate/Advanced, but then remarkably allow potential customers to self-stream. Big mistake. This may be good for revenue, but it is a headache for the class trainer and very annoying for attendees when it becomes obvious that some in the class have badly misjudged their own level of competence

My Little Red Book of Computer Training says:

  1. For all-day courses, the afternoon is pretty much a waste of everyone's time:
     a. Morning peak alertness cannot be maintained, post-lunch is the Bermuda Triangle of mental agility, and by 3pm everyone is watching the clock and itching to leave
     b. Increasing wear and tear on the presenter causes decreasing classroom energy levels
  2. When a course involves attendees from different organisations:
     a. They compete for the presenter's attention to focus on their specific issues, which are often irrelevant to the rest of the group
     b. There's always at least ONE person who slows the course or is just annoying and disruptive
  3. With large groups, individuals cannot receive personal attention
  4. Most training organisations offer inflexible course schedules while also reserving the right to cancel courses due to insufficient numbers ... and they often exercise that right (as a contract trainer, I've been "cancelled" one working day before a day that I had to reserve weeks in advance)
  5. When attendees return to their organisations, they are obliged to catch up on their everyday work that backed up while they were away on training (the insinuation from colleagues will be that it was "time off"). At least 24 hours will elapse before they can put anything they've learned into practice, assuming they can remember it.
  6. 80% of general computer work uses only 20% of the available features, yet so many training courses get side-tracked on "bells and whistles". Why get hung up on long-hand menu-based or function-key procedures when, most of the time, a right-click will provide the context-sensitive options you need?
  7. Once people understand the contextual use of a program, have overviewed its capabilities, strengths and weaknesses, and used the basic features with confidence, they only need pointers on the best ways to find out how to do other things. My view therefore is that "Advanced" courses will never help anyone who can't absorb and apply the basics, while the more capable users should be able to figure out the additional features for themselves (just as, after buying and using a basic DIY toolkit, you add new tools one at a time only if and when your work demands them and you can afford the time and cost)
  8. Presenters should be brave enough to highlight software weaknesses (are they scared that Microsoft and Adobe will beat them up?) e.g.: yes, you CAN do Excel-style maths in Word, but you'd be MAD to do so as by default there is no real-time calculation; yes, you can create macros in Office products, but will you and others gamble on opening a file when the security sirens are going off ... is it your macro or a real virus?
  9. Relevant computer history and perspective helps people to understand that no program is ever perfect, why some applications seem to have been built by committee (answer: they HAVE!), and that checking your work is important (we tend to assume that computers can be trusted) ...
  10. ... problems are rarely due to stupidity, so people need some basic Get Out Of Jail Free procedures for when things start to fall apart and the deadline is looming.

To address as many of these shortcomings as possible, I have put my money where my mouth is and created a boutique training facility that is unique on the Gold Coast (Australia):

  • Class sizes are a maximum of four
  • Classes are restricted to people from the same organisation, or groups of friends. Attendees therefore never have to deal with strangers, allowing them more freedom to admit difficulties and to ask questions. It also provides me with the freedom to tailor the presentation to cover their personal scenarios and even use their own sample files that they are encouraged to bring to the class. Anyone who is "slow" tends to be naturally supported by colleagues without any ill-feeling
  • Class bookings are never cancelled, even if that means one-to-one training once I have accepted the booking.
  • My short courses never exceed four hours. A morning session is preferred so that the course finishes by lunchtime. People leave fresh, motivated, and able to put their new skills into immediate practice before the end of their working day.
  • I focus on being efficient and productive with the core elements of the program and whatever issues the attendees want explored, and I highlight the traps for the unwary.
  • The training room supports learning by using natural daylight and fresh air whenever possible
  • On leaving, attendees get a set of relevant browser bookmarks and carefully selected freeware to assist them in further self-learning and everyday computing tasks

Greg Barnett started his computers and communications career in 1976 and has witnessed many of the quantum leaps in technology ... as well as many of its ghastly pratfalls.

Technology is supposed to make our lives easier, but the frustrated rending of clothes and gnashing of teeth shows no sign of abating. Computers and software are idiot savants that continue to get faster and more flexible, but not necessarily smarter.

Back in the 70s, only technical people got to use computers, and they were experts in specific, limited technical fields. Such people were trained regularly. These days, a secretary is expected to use Word, Excel, Publisher, Outlook, Explorer, and to manage files and printing, all possibly without any training. Does he/she use them? Yes. Does he/she use them well? Mmm ... what's YOUR opinion?

Slow Computer Problems

It is frustrating when you have slow computer problems. The slowdown can have a number of causes, most of which build up simply through everyday use of your computer. Computers are a great tool, but they require maintenance and the right programs to do the maintenance. There are programs out there that will do the maintenance for you; all you have to do is schedule them.

As you surf the internet, your computer accumulates spyware from websites. Spyware not only spies on you, it also slows your computer down considerably. Spyware can be removed and prevented by anti-spyware programs, which should run a scan at least once a week to detect and remove it.
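As a rough illustration only (not something prescribed by this article), here is a minimal Python sketch of that kind of weekly check. It assumes the built-in Microsoft Defender is your anti-spyware tool and that its command-line scanner lives at the usual path on recent versions of Windows; both are assumptions, so adapt it to whatever scanner you actually use.

    # spyware_scan.py - minimal sketch: trigger an anti-spyware scan on Windows.
    # Assumes the built-in Microsoft Defender; the executable path below is the
    # usual one on recent Windows versions and may differ on your machine.
    import subprocess

    DEFENDER_CLI = r"C:\Program Files\Windows Defender\MpCmdRun.exe"

    def run_quick_scan() -> None:
        # -ScanType 1 requests a quick scan; use 2 for a full scan.
        subprocess.run([DEFENDER_CLI, "-Scan", "-ScanType", "1"], check=True)

    if __name__ == "__main__":
        run_quick_scan()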

Regular computer maintenance, such as disk defragmentation and disk checking, is important. Registry optimizers can speed up your computer by cleaning the registry, removing entries you don't need and fixing problems. These tools need to be run because your computer gets cluttered over time. It is just like cleaning a house: when the house gets cluttered, it is hard to move around, and the same is true of a computer. Keeping everything cleaned and maintained will make sure it is working at its best.
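To make the defragment and disk-check steps concrete, here is a minimal Python sketch that calls two of Windows' built-in tools, defrag and chkdsk, from a script. It assumes you are on Windows, running from an administrator console, and that drive C: is the one you want to maintain; it does not cover registry cleaning, which would need a separate third-party tool.

    # maintenance.py - minimal sketch: run two built-in Windows maintenance tools.
    # Assumes Windows and an administrator console; the drive letter is an example.
    import subprocess

    def run_basic_maintenance(drive: str = "C:") -> None:
        # Defragment/optimise the chosen drive with the built-in defrag tool.
        subprocess.run(["defrag", drive, "/O"], check=True)
        # Check the file system for errors (read-only scan, so no reboot is needed).
        subprocess.run(["chkdsk", drive], check=True)

    if __name__ == "__main__":
        run_basic_maintenance()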

Maintenance should be done at least once a week. If you let things build up for longer, it will take more time to fix your computer. Repair shops charge a lot for these services, and you have to keep going back. By doing things yourself you save money and ensure maintenance is done every week. Hiring someone is expensive, and because of the cost you won't have it done as often.
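If you want that weekly routine to run itself, one hedged option is to register the maintenance script above with Windows' built-in Task Scheduler via the schtasks command, as in the sketch below. The task name, day, time and script path are placeholders for illustration, and the script should be run once from an administrator console.

    # schedule_maintenance.py - minimal sketch: register a weekly maintenance task.
    # Uses Windows' built-in schtasks tool; name, day, time and path are placeholders.
    import subprocess

    def schedule_weekly_maintenance() -> None:
        subprocess.run(
            [
                "schtasks", "/Create",
                "/SC", "WEEKLY",                             # run once a week ...
                "/D", "SUN",                                 # ... every Sunday
                "/ST", "09:00",                              # at 9 a.m.
                "/TN", "WeeklyMaintenance",                  # task name (placeholder)
                "/TR", r"python C:\scripts\maintenance.py",  # command to run (placeholder path)
                "/RL", "HIGHEST",                            # run elevated so defrag/chkdsk work
            ],
            check=True,
        )

    if __name__ == "__main__":
        schedule_weekly_maintenance()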
