The evolution of computing and innovation of technology
Thursday, 10 April 2014 00:59
In the early 1880s Herman Hollerith created a device to help automate the United States of America’s census process: the punch-card tabulator. The idea was simple but brilliant. By punching a series of holes in paper, information about a population could be stored. For example, a hole in a predefined location in the card would indicate that a family had two children.
Seeing the advantage of such a process, the Census Bureau put Hollerith’s machine to good use in the 1890 census. This new phenomenon improved the tallying procedure, making it much quicker for the Bureau to tabulate the census than it had been in the 1880 round, even though the country’s population had grown.
Having proved its value in speeding up calculations and reducing the cost of the census (savings almost ten times greater than the Bureau had expected), the punch-card tabulator attracted the attention of the owners of big businesses such as railroads, insurance agencies, banks, and mass-market manufacturers and retailers.
Seeing the commercial potential of his invention, Hollerith established the Tabulating Machine Company, with the main aim of selling tabulators to businesses. As the need for such services grew, Hollerith’s firm merged with two other firms to form the Computing-Tabulating-Recording Company, to supply even larger business machines.
Dawn of the IT industry
Thirteen years later, a talented young manager named Thomas J. Watson was brought in to run the business. Once he had taken the reins of the company, he changed its name to the more impressive-sounding International Business Machines Corporation (IBM). This saw the dawn of the information technology industry.
In retrospect, one might assume that people at the time foresaw computers becoming the backbone of modern business. Strangely, the reverse was true: there was much scepticism about the machine’s usefulness. So much so that Howard Aiken, a distinguished Harvard mathematician, a member of the US government’s National Research Council and the creator of the Harvard Mark I computer, commented to Edward Cannon of the US National Bureau of Standards in 1948 that the idea that there would be a big market for computers was “foolishness”. Furthermore, he has been documented as stating that “there never would be enough work for more than two of these computers”.
However, as technology evolved, the advent of the tiny transistor saw the big, bulky vacuum tubes replaced. This led to the birth of what we now know as the desktop computer. It is ironic to note that, as with those earlier views, the dominant computer companies of the day, from IBM to Digital, paid little attention to these quirky new machines, for the PCs were seen as too weak to be of any real use.
Microsoft in the mix
It took the brilliance of a college dropout named Bill Gates to see the potential of these personal computers in business. In 1975, Gates, together with his high-school friend Paul Allen, founded a little company named Microsoft to write software for the newly invented PC. Gates envisaged that these machines would not only find a place inside business but, because of their versatility and low cost, would supplant the bulky mainframe as the centre of corporate computing.
Nevertheless, such advances still did not allow the computer to meet its full potential. The issue was that a workstation could not compete with minicomputers and mainframes on the power of a single machine. This was solved by the advent of networks of machines, in which the collective power of the PCs was greater than the sum of the parts. In the 1990s a slew of “application service providers” emerged, with considerable venture-capital backing, in hopes of delivering software programs to businesses over the Internet.
However, this good intention met, at the time, a significant “barrier to entry”: a chasm existed between communication speeds and computer processing speeds. Two laws are often cited to explain this gap. One is Moore’s Law: Gordon E. Moore observed in 1965 that the number of transistors on integrated circuits doubles approximately every two years. The other is Grove’s Law: Andrew Grove observed that while chip density doubles every 18 months (Moore’s Law), telecommunications bandwidth doubles only every 100 years.
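The scale of that chasm can be sketched with a little arithmetic. The doubling periods below are the figures quoted above (18 months for chip density, 100 years for bandwidth); the 30-year horizon is an arbitrary illustrative assumption, not a historical measurement.

```python
# A minimal sketch of the gap implied by Moore's Law versus Grove's Law.
# Doubling periods are the figures quoted in the text; the time horizon
# is an illustrative assumption.

def doublings(years: float, doubling_period_years: float) -> float:
    """Growth factor after `years`, given one doubling per period."""
    return 2 ** (years / doubling_period_years)

years = 30
chip_growth = doublings(years, 1.5)       # chip density: doubles every 18 months
bandwidth_growth = doublings(years, 100)  # bandwidth: doubles every century

print(f"Over {years} years, chip density grows ~{chip_growth:,.0f}x")
print(f"Over {years} years, bandwidth grows ~{bandwidth_growth:.2f}x")
```

On these figures, three decades of Moore’s Law yield roughly a million-fold improvement while Grove’s Law yields barely 23 per cent, which is exactly the chasm the early application service providers ran into.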
However, in the recent past Grove’s Law has progressively been negated. With the genesis and subsequent improvement of communication services, the next stage in the evolution of computing services has arrived: providing computing as a utility. As data can now be transferred quickly and cheaply, the full power of computers can finally be delivered to users from afar. It matters little whether the server running your program is in the data centre down the hall or in somebody else’s data centre on the other side of the country. All the machines are now connected and shared – they are one machine.
What is utility computing? It is the packaging of computing resources, such as computation, storage and applications, as a metered service.
Imagine the day when a person could simply plug a laptop, palmtop or tablet PC into a wall socket and obtain services pre-negotiated between the service provider and his or her business, so as to carry out the day’s work.
The service provider may offer the company a comprehensive package, pre-bundled with computer hardware (standard servers, CPUs, monitors, input devices and network cables) and internet access, including web servers and browsing software.
The package may also include access to the processing power of a ‘supercomputer’. Some corporations have hefty computational requirements: a financial company, for example, might need to process rapidly changing data gathered from the stock market. While a normal computer might take hours to process such complicated data, a supercomputer could complete the same task much more quickly.
It may likewise include off-site data storage, also called cloud storage. There are many reasons a company might want to store data off-site. If the company processes a lot of data, it might not have the physical space to hold the data servers it needs. An off-site backup is also a good way to protect information in case of a catastrophe: if the company’s building were destroyed in a fire, its data would still exist in another location.
At the end of the month, the business in question would receive an invoice with charges based on usage rather than a fixed flat fee. This can be compared to a modern electricity tariff, where usage of a certain number of units is billed at a predefined rate.
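The electricity-tariff analogy can be made concrete with a small sketch of usage-based billing. The resources, quantities and per-unit rates below are purely illustrative assumptions, not any real provider’s price list.

```python
# A minimal sketch of metered, usage-based billing, analogous to an
# electricity tariff. All figures are hypothetical.

usage = {                       # metered consumption for the month
    "cpu_hours": 1_200,
    "storage_gb_months": 500,
    "bandwidth_gb": 80,
}

rates = {                       # price per unit, in dollars (hypothetical)
    "cpu_hours": 0.05,
    "storage_gb_months": 0.02,
    "bandwidth_gb": 0.09,
}

# The invoice charges each resource at its rate times actual usage.
invoice = {resource: qty * rates[resource] for resource, qty in usage.items()}
total = sum(invoice.values())

for resource, charge in invoice.items():
    print(f"{resource:>18}: ${charge:8.2f}")
print(f"{'total':>18}: ${total:8.2f}")
```

A month with half the usage would produce half the bill, which is precisely the property that distinguishes a metered service from a flat fee.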
In fact, with the dawn of cloud computing, this vision has moved from a mere possibility towards reality. Cloud computing, broadly, is the provisioning of services over the internet. At the end of the month, users of such services are billed precisely for the amount of usage of the services to which they have subscribed. In cloud computing, therefore, resources (such as hardware, applications, computing power, data storage and platforms) are available to users without them knowing the exact location from which these services originate.
The advantages of this paradigm are multifaceted. First, by its very nature it allows scalability, enabling a company to request services for anywhere from one user to a million in a matter of days. A company can thereby account for peak or cyclical sales and plan its resource allocation accordingly: when more users must be served, it simply requests additional resources.
Second, it allows innovation and agility. Typically, companies cannot take advantage of the full potential that IT brings. While IT evolves constantly, systems are generally upgraded at a slower pace (due to restrictions in funding and budgeting): major feature upgrades are done only every couple of years, requiring an overhaul of the current system with company-wide impact. Through economies of scale, a sourcing company can instead provide cutting-edge IT services to the companies that require them in mere minutes.
Third is a visible reduction in upfront IT capital expenditure, as the company bears only a utility expense in the form of a periodic subscription to a service, shortening its deployment effort.
There have long been notable gaps between companies’ technology investment and the resulting rise in productivity. This style of utility computing could therefore have a significant impact on small to midsize companies in certain industries.
The utility computing model is all about using technology only when you need it, for as long as you need it. I think that this shift to utility computing is inevitable, and it is progressively becoming true, as this is a convenient, flexible and economical alternative to the traditional large scale data centre maintained within the organisation.
We must not forget that when Edison first demonstrated his incandescent light bulb on 31 December 1879 in Menlo Park, he stated: “We will make electricity so cheap that only the rich will burn candles.” Because of that audacious thought, we can now say without a shadow of a doubt that a world without the provision of electricity as a service cannot be envisaged!
(The writer is the Secretary to ISACA – Sri Lanka Chapter, and working as an Information Systems Auditor at SJMS Associates – a firm of Chartered Accountants and independent correspondent firm to Deloitte Touche Tohmatsu in Sri Lanka.)