Tuesday, 8 October 2013

Virtualization



Introduction


Virtualization is the process of simulating a virtual environment in which software can run on different platforms or hardware systems. It can also be defined as follows:
                  Virtualization is the process of creating a virtual version (in place of the original) of something such as an operating system, a server, a storage device, or network resources.

Virtualization can be seen as part of an overall trend in enterprise IT that includes autonomic computing, a scenario in which the Information Technology environment will be able to manage itself based on perceived activity, and utility computing, in which computer processing power is seen as a utility that clients can pay for only as needed. The usual goal of virtualization is to centralize administrative tasks while improving scalability and overall hardware-resource utilization. With virtualization, several operating systems can be run in parallel on a single CPU. This parallelism tends to reduce overhead costs and differs from multitasking, which involves running several programs on the same OS.
Operating system virtualization is the use of software to allow a piece of hardware to run multiple operating system images at the same time. The technology got its start on mainframes decades ago, allowing administrators to avoid wasting expensive processing power.

Types of Virtualization


Reasons for virtualization


  • In the case of server consolidation, many small physical servers are replaced by one larger physical server to increase the utilization of costly hardware resources such as the CPU. Although the hardware is consolidated, the OSes typically are not. Instead, each OS running on a physical server is converted to a distinct OS running inside a virtual machine. The large server can "host" many such "guest" virtual machines. This is known as physical-to-virtual (P2V) transformation.
  • Consolidating servers can also have the added benefit of reducing energy consumption. A typical server runs at 425 W and VMware estimates an average server consolidation ratio of 10:1.
  • A virtual machine can be more easily controlled and inspected from outside than a physical one, and its configuration is more flexible. This is very useful in kernel development and for teaching operating system courses.
  • A new virtual machine can be provisioned as needed without the need for an up-front hardware purchase.
  • A virtual machine can easily be relocated from one physical machine to another as needed. For example, a salesperson going to a customer can copy a virtual machine with the demonstration software to his laptop, without the need to transport the physical computer. Likewise, an error inside a virtual machine does not harm the host system, so there is no risk of breaking down the OS on the laptop.
  • Because of the easy relocation, virtual machines can be used in disaster recovery scenarios.
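As a rough back-of-the-envelope illustration of the consolidation figures above (425 W per server, a 10:1 ratio), the potential energy saving can be sketched in a few lines; the server count here is hypothetical:

```python
# Estimate energy savings from server consolidation (P2V).
# Figures from the text: ~425 W per physical server, ~10:1 consolidation ratio.

def consolidation_savings(num_servers, watts_per_server=425, ratio=10):
    """Return (watts_before, watts_after, watts_saved) for a P2V consolidation."""
    hosts_after = -(-num_servers // ratio)  # ceiling division: hosts needed after P2V
    before = num_servers * watts_per_server
    after = hosts_after * watts_per_server
    return before, after, before - after

before, after, saved = consolidation_savings(100)
print(f"Before: {before} W, after: {after} W, saved: {saved} W")
# Before: 42500 W, after: 4250 W, saved: 38250 W
```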

Hardware virtualization


Hardware virtualization or platform virtualization refers to the creation of a virtual machine that acts like a real computer with an operating system. Software executed on these virtual machines is separated from the underlying hardware resources. For example, a computer that is running Microsoft Windows may host a virtual machine that looks like a computer with the Ubuntu Linux operating system; Ubuntu-based software can be run on the virtual machine.
In hardware virtualization, the host machine is the actual machine on which the virtualization takes place, and the guest machine is the virtual machine. The words host and guest are used to distinguish the software that runs on the physical machine from the software that runs on the virtual machine. The software or firmware that creates a virtual machine on the host hardware is called a hypervisor or Virtual Machine Manager.
Different types of hardware virtualization include:
  1. Full virtualization: Almost complete simulation of the actual hardware to allow software, which typically consists of a guest operating system, to run unmodified.
  2. Partial virtualization: Some but not all of the target environment is simulated. Some guest programs, therefore, may need modifications to run in this virtual environment.
  3. Paravirtualization: A hardware environment is not simulated; however, the guest programs are executed in their own isolated domains, as if they are running on a separate system. Guest programs need to be specifically modified to run in this environment.

 Memory virtualization

  Memory virtualization allows networked, and therefore distributed, servers to share a pool of memory to overcome physical memory limitations, a common bottleneck in software performance. With this capability integrated into the network, applications can take advantage of a very large amount of memory to improve overall performance and system utilization, increase memory-usage efficiency, and enable new use cases. Software on the memory-pool nodes (servers) allows nodes to connect to the memory pool to contribute memory and to store and retrieve data. Management software and memory-overcommitment technologies manage the shared memory, data insertion, eviction and provisioning policies, and assignment of data to contributing nodes, and handle requests from client nodes. The memory pool may be accessed at the application level or the operating-system level. At the application level, the pool is accessed through an API or as a networked file system to create a high-speed shared memory cache. At the operating-system level, a page cache can use the pool as a very large memory resource that is much faster than local or networked storage.
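A minimal in-process sketch of the pooling idea described above: nodes contribute capacity, clients store and retrieve data, and old entries are evicted when the pool fills. All class and method names are illustrative; real memory virtualization pools RAM across networked servers, not inside one process:

```python
from collections import OrderedDict

class MemoryPool:
    """Toy sketch of a virtualized memory pool: nodes contribute capacity,
    clients store/retrieve data, and stale entries are evicted when the
    pool is full. (Illustrative only -- real products work over a network.)"""

    def __init__(self):
        self.capacity = 0           # total bytes contributed by all nodes
        self.used = 0
        self.store = OrderedDict()  # key -> value, least-recently used first

    def contribute(self, node, nbytes):
        self.capacity += nbytes     # a node joins the pool with some memory

    def put(self, key, value):
        size = len(value)
        while self.used + size > self.capacity and self.store:
            _, old = self.store.popitem(last=False)  # evict least-recently used
            self.used -= len(old)
        self.store[key] = value
        self.used += size

    def get(self, key):
        value = self.store.get(key)
        if value is not None:
            self.store.move_to_end(key)  # mark as recently used
        return value
```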

 Storage Virtualization

Storage virtualization is commonly used in storage area networks (SANs). It makes recovery, archiving, and backup faster and easier, and it can be implemented at different levels of the SAN.
While the benefits are numerous, some core ones are listed here:
  • Business continuity – lower downtime for mission-critical apps and programs.
  • Improved utilization – increase efficiency by raising server-asset usage from roughly 25% (average) to 60% or more.
  • Reduced cost – less hardware, power, and space are needed.
  • Simplified management – deploy, administer, and monitor from one console.
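The mapping idea at the heart of storage virtualization can be sketched in miniature: a logical volume whose blocks are transparently placed on several physical disks. This is a toy model with invented names, not how any particular SAN product works:

```python
class VirtualVolume:
    """Toy storage-virtualization layer: presents one logical volume whose
    blocks are mapped onto blocks of several physical disks (names illustrative)."""

    def __init__(self, disks, blocks_per_disk):
        # each "disk" is just a list of block buffers in this sketch
        self.disks = {name: [None] * blocks_per_disk for name in disks}
        self.map = {}    # logical block -> (disk name, physical block)
        self.free = [(d, b) for d in disks for b in range(blocks_per_disk)]

    def write(self, lblock, data):
        if lblock not in self.map:
            self.map[lblock] = self.free.pop(0)  # allocate on first write
        disk, pblock = self.map[lblock]
        self.disks[disk][pblock] = data

    def read(self, lblock):
        disk, pblock = self.map[lblock]          # client never sees this mapping
        return self.disks[disk][pblock]
```

The client addresses only logical blocks; the layer decides (and can later change) where they physically live, which is what makes migration and backup easier.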

 Network Virtualization

     Network virtualization is the process of combining hardware and software network resources and network functionality into a single, software-based administrative entity: a virtual network. Network virtualization involves platform virtualization, often combined with resource virtualization.
Network virtualization is categorized as either external, combining many networks, or parts of networks, into a virtual unit, or internal, providing network-like functionality to the software containers on a single system. Whether virtualization is internal or external depends on the implementation provided by vendors that support the technology.

 Components of a virtual network

Various equipment and software vendors offer network virtualization by combining any of the following:
  • Network hardware, such as switches and network adapters, also known as network interface cards (NICs)
  • Network elements such as firewalls and load balancers
  • Networks, such as virtual LANs (VLANs) and containers such as virtual machines (VMs)
  • Network storage devices
  • Network M2M elements such as telecommunications 4G HLR and SLR devices
  • Network mobile elements such as laptops, tablets, and cell phones
  • Network media, such as Ethernet and Fibre Channel
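A tiny sketch of internal network virtualization from the list above: a software switch that gives VM ports network-like connectivity while keeping traffic isolated per VLAN. The port and VLAN identifiers are made up for illustration:

```python
class VirtualSwitch:
    """Toy internal-network-virtualization sketch: one software switch gives
    VM ports connectivity, isolated per VLAN (names and IDs illustrative)."""

    def __init__(self):
        self.ports = {}  # port name -> (vlan id, list of received frames)

    def attach(self, port, vlan):
        self.ports[port] = (vlan, [])

    def send(self, src, frame):
        vlan, _ = self.ports[src]
        for port, (v, inbox) in self.ports.items():
            if port != src and v == vlan:  # deliver only within the same VLAN
                inbox.append(frame)

    def received(self, port):
        return self.ports[port][1]
```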

 Server Virtualization

     Server virtualization is the masking of server resources, including the number and identity of individual physical servers, processors, and operating systems, from server users. The server administrator uses a software application to divide one physical server into multiple isolated virtual environments. The virtual environments are sometimes called virtual private servers, but they are also known as guests, instances, containers or emulations.

All workstations run their respective operating systems, while for enterprise requirements all data processed on a local client workstation is stored on a central server after being received by the virtualized central server. This is important because it gives each user independence in choosing the operating-system environment of the workstation they work on.


 The intention is to spare the user from having to understand and manage complicated details of server resources while increasing resource sharing and utilization and maintaining the capacity to expand later.

Virtualization means partitioning one server into many virtual servers, called virtual machines (VMs). Using hypervisor technology, networking, storage, and computing resources are pooled and delivered to each virtual machine. Even though the VMs share the resources of the same physical server, each one runs in isolation from the others.
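The partitioning described above can be sketched as a toy hypervisor that carves one physical server's CPUs and RAM into isolated VMs and refuses requests that exceed the host's capacity (all numbers and names are illustrative):

```python
class Hypervisor:
    """Toy sketch of server partitioning: one physical server's CPU and RAM
    are carved into isolated virtual machines (capacities illustrative)."""

    def __init__(self, cpus, ram_gb):
        self.free_cpus, self.free_ram = cpus, ram_gb
        self.vms = {}

    def create_vm(self, name, cpus, ram_gb):
        if cpus > self.free_cpus or ram_gb > self.free_ram:
            raise RuntimeError("insufficient host resources")
        self.free_cpus -= cpus
        self.free_ram -= ram_gb
        self.vms[name] = {"cpus": cpus, "ram_gb": ram_gb}

    def destroy_vm(self, name):
        vm = self.vms.pop(name)          # destroying a VM returns its share
        self.free_cpus += vm["cpus"]
        self.free_ram += vm["ram_gb"]
```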

 Desktop Virtualization

Desktop virtualization is the concept of separating the logical desktop from the physical machine.
One form of desktop virtualization, virtual desktop infrastructure (VDI), can be thought as a more advanced form of hardware virtualization. Rather than interacting with a host computer directly via a keyboard, mouse, and monitor, the user interacts with the host computer using another desktop computer or a mobile device by means of a network connection, such as a LAN, Wireless LAN or even the Internet. In addition, the host computer in this scenario becomes a server computer capable of hosting multiple virtual machines at the same time for multiple users.
As organizations continue to virtualize and converge their data-center environments, client architectures also continue to evolve in order to take advantage of the predictability, continuity, and quality of service delivered by their converged infrastructure. For example, companies like HP and IBM provide a hybrid VDI model with a range of virtualization software and delivery models to improve upon the limitations of distributed client computing. Selected client environments move workloads from PCs and other devices to data-center servers, creating well-managed virtual clients, with applications and client operating environments hosted on servers and storage in the data center. For users, this means they can access their desktop from any location without being tied to a single client device. Since the resources are centralized, users moving between work locations can still access the same client environment with their applications and data. For IT administrators, this means a more centralized, efficient client environment that is easier to maintain and able to respond more quickly to the changing needs of the user and the business.
     Thin clients, which are seen in desktop virtualization, are simple and/or cheap computers that are primarily designed to connect to the network. They may lack significant hard disk storage space, RAM or even processing power, but many organizations are beginning to look at the cost benefits of eliminating “thick client” desktops that are packed with software (and require software licensing fees) and making more strategic investments. Desktop virtualization simplifies software versioning and patch management, where the new image is simply updated on the server, and the desktop gets the updated version when it reboots. It also enables centralized control over what applications the user is allowed to have access to on the workstation.

Application virtualization

 Application virtualization is software methodology that encapsulates application software from the underlying operating system on which it is executed. A fully virtualized application is not installed in the traditional sense, although it is still executed as if it were. The application behaves at runtime like it is directly interfacing with the original operating system and all the resources managed by it, but can be isolated or sandboxed to varying degrees. In this context, the term "virtualization" refers to the artifact being encapsulated (application), which is quite different to its meaning in hardware virtualization, where it refers to the artifact being abstracted (physical hardware).
 The application remains unaware that it accesses a virtual resource instead of a physical one. Since the application now works with one file instead of many files and registry entries spread throughout the system, it becomes easy to run the application on a different computer, and previously incompatible applications can be run side by side.


Benefits of application virtualization

Allows applications to run in environments that do not suit the native application:
  • e.g. Wine allows some Microsoft Windows applications to run on Linux.
  • e.g. CDE, a lightweight application-virtualization tool, allows Linux applications to run in a distribution-agnostic way.
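One mechanism behind application virtualization — redirecting an application's file accesses from system locations into a per-app sandbox — can be sketched as follows; the class name and paths are hypothetical:

```python
import os

class AppSandbox:
    """Toy application-virtualization sketch: the app believes it reads and
    writes system paths, but every access is redirected into the app's own
    sandbox directory (names and paths illustrative)."""

    def __init__(self, root):
        self.root = root
        os.makedirs(root, exist_ok=True)

    def _redirect(self, path):
        # map e.g. "/etc/app.conf" -> "<root>/etc/app.conf"
        target = os.path.join(self.root, path.lstrip("/"))
        os.makedirs(os.path.dirname(target), exist_ok=True)
        return target

    def open(self, path, mode="r"):
        return open(self._redirect(path), mode)
```

Because each app's state lives under one sandbox root, moving the app to another machine or running conflicting versions side by side becomes a matter of copying directories.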

Operating System Virtualization

    Operating System virtualization is the use of software that allows a piece of hardware to run multiple operating system images at the same time.
In operating system-level virtualization, a physical server is virtualized at the operating system level, enabling multiple isolated and secure virtualized servers to run on a single physical server. The "guest" OS environments share the same OS as the host system – i.e. the same OS kernel is used to implement the "guest" environments. Applications running in a given "guest" environment view it as a stand-alone system. The pioneer implementation was FreeBSD jails; other examples include Solaris Containers, OpenVZ, Linux-VServer, LXC.

As shown above, this machine is currently running in an Ubuntu environment, but it is actually sharing its hardware with Windows 7.
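The defining property of OS-level virtualization — isolated guest environments sharing one host kernel — can be sketched as a toy model (jail/zone/LXC-style; all names are illustrative):

```python
class Kernel:
    """The single shared kernel instance, as in OS-level virtualization."""
    def __init__(self):
        self.version = "toy-3.11"

class Container:
    """Toy container sketch: each container has an isolated process table
    and filesystem view, but all containers share the one host kernel."""
    def __init__(self, name, kernel):
        self.name = name
        self.kernel = kernel    # shared, not copied
        self.processes = []     # isolated per-container process table
        self.files = {}         # isolated per-container filesystem view

    def run(self, cmd):
        self.processes.append(cmd)

host_kernel = Kernel()
web = Container("web", host_kernel)
db = Container("db", host_kernel)
web.run("nginx")
```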

Full Virtualization

 In computer science, full virtualization is a virtualization technique used to provide a certain kind of virtual machine environment, namely, one that is a complete simulation of the underlying hardware. Full virtualization requires that every salient feature of the hardware be reflected into one of several virtual machines – including the full instruction set, input/output operations, interrupts, memory access, and whatever other elements are used by the software that runs on the bare machine, and that is intended to run in a virtual machine. In such an environment, any software capable of execution on the raw hardware can be run in the virtual machine and, in particular, any operating systems. The obvious test of virtualization is whether an operating system intended for stand-alone use can successfully run inside a virtual machine.
Full virtualization was not quite possible on the x86 platform until the 2005-2006 addition of the AMD-V and Intel VT-x extensions (see x86 virtualization). Many platform virtual machines for the x86 platform came very close, and claimed full virtualization, even prior to the AMD-V and Intel VT-x additions. Examples include Adeos, Mac-on-Linux, Parallels Desktop for Mac, Parallels Workstation, VMware Workstation, VMware Server (formerly GSX Server), and VirtualBox.
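The classic mechanism that makes full virtualization work is trap-and-emulate: the guest runs most instructions natively, but privileged ones trap to the monitor, which emulates their effect. A toy sketch, with an invented instruction set:

```python
class VMM:
    """Toy trap-and-emulate sketch of full virtualization: the guest runs
    instructions directly, but privileged ones trap to the monitor, which
    emulates their effect so the guest believes it owns the hardware.
    (Instruction names are illustrative, not a real ISA.)"""

    PRIVILEGED = {"out", "hlt", "cli"}

    def __init__(self):
        self.log = []                         # what the monitor emulated
        self.guest_state = {"halted": False}

    def execute(self, instr):
        if instr in self.PRIVILEGED:
            self.trap(instr)                  # hardware would raise a fault here
        # unprivileged instructions run natively, with no VMM involvement

    def trap(self, instr):
        self.log.append(instr)
        if instr == "hlt":
            self.guest_state["halted"] = True  # emulate: don't halt the host
```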

Data Virtualization

Data virtualization is an approach to data management that allows an application to retrieve and manipulate data without requiring technical details about the data, such as how it is formatted or where it is physically located.
Database virtualization may use a single ODBC-based DSN to provide a connection to a similar virtual database layer.

Benefits of Data Virtualization

  • Reduce risk of data errors
  • Reduce systems workload through not moving data around
  • Increase speed of access to data on a real-time basis
  • Significantly reduce development and support time
  • Increase governance and reduce risk through the use of policies
  • Reduce data storage required
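The benefits above follow from one core idea: the application queries a single logical view and never learns where the data physically lives or how each source formats it. A toy sketch with invented sources and fields:

```python
class DataVirtualizationLayer:
    """Toy sketch of data virtualization: the application queries one
    logical view; the layer hides which physical source answers and how
    that source formats its records (all names are illustrative)."""

    def __init__(self):
        self.sources = {}              # source name -> fetch function

    def register(self, name, fetch):
        self.sources[name] = fetch

    def query(self, key):
        # try each underlying source; the caller never knows which answered
        for fetch in self.sources.values():
            row = fetch(key)
            if row is not None:
                return row
        return None

# two "physical" sources with different formats, hidden behind the layer
crm = {"c42": ("Ada", "ada@example.com")}  # tuple-based records
erp = {"c99": {"name": "Grace"}}           # dict-based records

layer = DataVirtualizationLayer()
layer.register("crm", lambda k: {"name": crm[k][0]} if k in crm else None)
layer.register("erp", lambda k: erp.get(k))
```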

Database virtualization

With database virtualization, multiple layers of a single database are created and made accessible to users across the required platforms. The single database is manipulated and edited from many places, and all of those changes are saved in the central main database. In technical terms, database virtualization is the decoupling of the database layer, which lies between the storage and application layers within the application stack. Virtualization of the database layer enables a shift away from the physical toward the logical or virtual. Virtualization enables compute and storage resources to be pooled and allocated on demand. This enables both the sharing of single-server resources for multi-tenancy and the pooling of server resources into a single logical database or cluster. In both cases, database virtualization provides increased flexibility, more granular and efficient allocation of pooled resources, and more scalable computing.

Virtualization in education

Virtualization plays a significant role in education wherever it is applied. Successfully managing multiple sites and an array of faculty, staff, and student needs is becoming increasingly difficult as budgets decrease and equipment and facilities age. Virtualization in education can help cut costs, increase efficiency, and adapt quickly and automatically to changing requirements.
Choose from:
  • Hardware virtualization. Run multiple operating systems (for example, Linux and Windows) on a single server.
  • Application virtualization. Rapidly deploy applications, even those that conflict with each other, with low administrative overhead.
  • Presentation virtualization. Execute an application on one computer and present it with another.
  • Desktop virtualization. Run multiple operating systems (OSs) on a single desktop. Centrally execute Windows 7 in virtual machines (VMs) running on servers.
  • Virtualization management. Manage your entire virtual and physical infrastructures with a unified set of tools.
All the products and technologies we use in virtualization solutions have a common, policy-based management system that helps to ease the load on system managers.

Benefits
  • Help reduce your total cost of ownership (TCO) and increase your return on investment (ROI) across your entire computing infrastructure.
  • Turn computing assets into on-demand services to improve your business agility.
  • Maintain "one application, one server" while reducing physical server sprawl through server consolidation and provisioning.
  • Provide optimal desktop solutions for different user needs while still meeting IT requirements.
  • Centrally provision and manage both physical and virtual resources.
  • Help ensure effective business continuity and disaster recovery by compartmentalizing workflows and maintaining failover plans.
  • Rapidly model and test different environments without significant expansion of hardware and physical resources.
  • Improve security by isolating computing layers and minimizing the chance of widespread failure.


 Thanks....
Please share it.....

