History and Evolution of Cloud Computing
Cloud computing, also referred to as "the cloud," is the delivery of on-demand computing resources, everything from applications to data centers, over the internet on a pay-per-use basis.
The concept of cloud computing was born around the 1950s, when large-scale mainframes with high-volume processing power became available. To make efficient use of that computing power, the practice of time-sharing, or resource pooling, evolved.
Back in those days, dumb terminals provided access to the mainframes, and multiple users could reach the same data storage layer and CPU power from any terminal.
In the 1970s, with the release of an operating system called Virtual Machine (VM), it became possible for mainframes to run multiple systems, or virtual machines, on a single physical node.
The VM operating system took the 1950s practice of shared mainframe access further by allowing multiple distinct compute environments to exist on the same physical hardware.
Each virtual machine hosted a guest operating system that behaved as though it had its own memory, CPU, and hard drives, even though these were shared resources.
Virtualization became a huge catalyst for the evolution of computing and communication.
Twenty years ago, physical hardware was quite expensive. As the internet became more widespread and accessible, and the pressure to make hardware more cost-effective grew, servers were virtualized into shared hosting environments, virtual private servers, and virtual dedicated servers, using the same type of functionality the VM operating system provided.
A hypervisor allows a company that would otherwise need several physical systems to run its applications to take one physical node and split it into multiple virtual systems.
We can define a hypervisor as a small software layer that enables multiple operating systems to run alongside each other while sharing the same physical computing resources.
The hypervisor also separates the virtual machines logically, assigning each its own slice of the underlying computing power, memory, and storage, and preventing the virtual machines from interfering with each other.
If one operating system suffers a security compromise or a crash, the others keep working.
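To make the idea concrete, here is a small, purely illustrative Python sketch of carving one physical node into isolated virtual machines. All class names and figures are invented for the example; a real hypervisor such as KVM or VMware ESXi enforces partitioning and isolation at the hardware and kernel level, not in application code like this.

```python
# Toy model (not a real hypervisor): one physical node hands out slices
# of CPU, memory, and storage to virtual machines, and a crash in one VM
# does not affect the others.

class PhysicalNode:
    def __init__(self, cpus, memory_gb, storage_gb):
        self.free = {"cpus": cpus, "memory_gb": memory_gb, "storage_gb": storage_gb}
        self.vms = []

    def create_vm(self, name, cpus, memory_gb, storage_gb):
        """Assign the new VM its own slice of the underlying resources."""
        request = {"cpus": cpus, "memory_gb": memory_gb, "storage_gb": storage_gb}
        if any(self.free[k] < v for k, v in request.items()):
            raise RuntimeError(f"Not enough free resources for {name}")
        for k, v in request.items():
            self.free[k] -= v
        vm = {"name": name, **request, "status": "running"}
        self.vms.append(vm)
        return vm

    def crash(self, name):
        """A crash in one VM leaves the other VMs untouched."""
        for vm in self.vms:
            if vm["name"] == name:
                vm["status"] = "crashed"


node = PhysicalNode(cpus=16, memory_gb=64, storage_gb=1000)
node.create_vm("web", cpus=4, memory_gb=8, storage_gb=100)
node.create_vm("db", cpus=8, memory_gb=32, storage_gb=500)
node.crash("web")
print([(vm["name"], vm["status"]) for vm in node.vms])  # "db" keeps running
```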
As technologies and hypervisors evolved and were able to share and deliver resources reliably, some companies decided to make these benefits accessible to users who didn't have enough physical servers to create their own cloud computing infrastructure.
Thanks to this, users could order cloud resources from a large pool of available resources and pay for them on a per-use basis, also known as pay as you go.
This allows companies to change from a CAPEX model to an OPEX model, which is more cash-flow friendly.
It changes the way companies manage resources: they no longer need to make a huge capital expenditure in hardware up front; they can simply pay for compute resources as and when needed.
Key concepts
Mainframes: A mainframe computer, informally called a mainframe or big iron, is a computer used primarily by large organizations for critical applications like bulk data processing for tasks such as censuses, industry and consumer statistics, enterprise resource planning, and large-scale transaction processing. A mainframe computer is large but not as large as a supercomputer and has more processing power than some other classes of computers, such as minicomputers, servers, workstations, and personal computers. Most large-scale computer-system architectures were established in the 1960s, but they continue to evolve. Mainframe computers are often used as servers.
Resource pooling: In computer science, a pool is a collection of resources that are kept ready to use, rather than acquired on use and released afterwards. In this context, resources can refer to system resources such as file handles, which are external to a process, or internal resources such as objects. A pool client requests a resource from the pool and performs desired operations on the returned resource. When the client finishes its use of the resource, it is returned to the pool rather than released and lost.
The pooling of resources can offer a significant response-time boost in situations with a high cost of acquiring resources, a high rate of resource requests, and a low overall count of simultaneously used resources. Pooling is also useful when latency is a concern, because a pool offers predictable times to obtain resources, since they have already been acquired. These benefits mostly apply to system resources that require a system call, or remote resources that require network communication, such as database connections, socket connections, threads, and memory allocations. Pooling is also useful for expensive-to-compute data, notably large graphic objects like fonts or bitmaps, acting essentially as a data cache or a memoization technique.
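Because the pool pattern comes up constantly in cloud services (database connection pools, thread pools), here is a minimal illustrative object pool in Python. The Pool class and the make_connection factory are invented for the example and are a sketch of the idea, not a production implementation.

```python
# Minimal object pool: expensive-to-acquire resources are created once,
# handed out to clients, and returned to the pool instead of being
# destroyed, so later requests get a resource in predictable time.
import queue

class Pool:
    def __init__(self, factory, size):
        self._resources = queue.Queue()
        for _ in range(size):
            self._resources.put(factory())   # acquire everything up front

    def acquire(self):
        return self._resources.get()         # blocks if the pool is empty

    def release(self, resource):
        self._resources.put(resource)        # returned to the pool, not released


# Example usage with a stand-in "connection" object.
def make_connection():
    return object()   # placeholder for an expensive resource, e.g. a DB connection

pool = Pool(make_connection, size=3)
conn = pool.acquire()
# ... use conn ...
pool.release(conn)
```

Note that `acquire()` simply blocks until a resource is returned, which is what gives the predictable acquisition time mentioned above: the cost of creating the resource was paid once, when the pool was filled.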
Operating system: An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs.
Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting software for cost allocation of processor time, mass storage, printing, and other resources.
For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is usually executed directly by the hardware and frequently makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer – from cellular phones and video game consoles to web servers and supercomputers.
The dominant general-purpose personal computer operating system is Microsoft Windows, with a market share of around 76.45%. macOS by Apple Inc. is in second place (17.72%), and the varieties of Linux are collectively in third place (1.73%). In the mobile sector (including smartphones and tablets), Android's share was up to 72% in 2020.
Virtual machine (VM): In computing, a virtual machine (VM) is the virtualization/emulation of a computer system. Virtual machines are based on computer architectures and provide functionality of a physical computer. Their implementations may involve specialized hardware, software, or a combination.
Virtual machines differ and are organized by their function, shown here:
- System virtual machines (also termed full virtualization VMs) provide a substitute for a real machine. They provide the functionality needed to execute entire operating systems. A hypervisor uses native execution to share and manage hardware, allowing for multiple environments that are isolated from one another yet exist on the same physical machine. Modern hypervisors use hardware-assisted virtualization, relying on virtualization-specific hardware features, primarily from the host CPUs.
- Process virtual machines are designed to execute computer programs in a platform-independent environment.
Some virtual machine emulators, such as QEMU and video game console emulators, are designed to also emulate (or "virtually imitate") different system architectures thus allowing execution of software applications and operating systems written for another CPU or architecture. Operating-system-level virtualization allows the resources of a computer to be partitioned via the kernel. The terms are not universally interchangeable.
CAPEX model vs. OPEX model: Capital expenditures (CAPEX) are major purchases a company makes that are designed to be used over the long term. Operating expenses (OPEX) are the day-to-day expenses a company incurs to keep its business operational.
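As a rough, purely hypothetical illustration of the difference (all prices and quantities below are invented, not real cloud pricing), the CAPEX path means a large outlay before any use, while the OPEX path is a smaller recurring bill that stops when the resources are no longer needed:

```python
# Hypothetical numbers only, to contrast CAPEX and OPEX.
# CAPEX: buy servers up front; OPEX: rent equivalent capacity per hour.

server_price = 8000          # one-off purchase per server (CAPEX)
servers_needed = 5
hours_per_month = 730
hourly_rate = 0.40           # pay-as-you-go price per server-hour (OPEX)

capex_upfront = server_price * servers_needed
opex_per_month = servers_needed * hours_per_month * hourly_rate

print(f"Up-front CAPEX: ${capex_upfront:,.0f}")   # $40,000 before any use
print(f"Monthly OPEX:   ${opex_per_month:,.0f}")  # $1,460, only while in use
```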
This information comes from the course "Introduction to the Cloud," available for free on cognitiveclass.ai.