Tuesday, 31 December 2013

Top 10 Supercomputers

Top Ten Supercomputers in the World

  • 1 Tianhe-2 (MilkyWay-2)
  • 2 Titan - Cray XK7
  • 3 Sequoia - BlueGene/Q
  • 4 K computer
  • 5 Mira
  • 6 Piz Daint
  • 7 Stampede
  • 8 JUQUEEN
  • 9 Vulcan
  • 10 SuperMUC


Let's talk about what a supercomputer is.

According to Wikipedia-

“A supercomputer is a computer at the frontline of contemporary processing capacity – particularly speed of calculation.”

Well, if you didn't get that, then read this one:


“Supercomputers are high-capacity computers used for tasks that require huge computing power and data processing; they are gigantic in size.” They first came into view in the 1960s, when the idea was new, and the name behind the very first supercomputers was Cray.

Let's explore each supercomputer.

1 Tianhe-2


This is a Chinese supercomputer that went into operation in June 2013.

Located in Guangzhou, China, it is built on an Intel Xeon E5 and Xeon Phi architecture and consumes about 17.6 MW in operation (24 MW with cooling included). The operating system used is Kylin Linux, a distribution developed in China itself on top of the open-source Linux kernel; this Linux version is used on computers in Chinese agencies and government departments.

The total memory of Tianhe-2 is 1,375 TiB, made up of 1,000 TiB on the CPUs and 375 TiB on the coprocessors, and its storage is 12.4 PB (petabytes). According to NUDT, Tianhe-2 will be used for simulation, analysis, and government security applications.



Memory- 1,375 TiB (1,000 TiB CPU and 375 TiB Coprocessor)

Storage- 12.4 PB
Speed- 33.86 PFLOPS

Cost- 2.4 billion Yuan (390 million USD)

Purpose- Research and education.

2 Titan


Titan is a supercomputer built by Cray at Oak Ridge National Laboratory for use in a variety of science projects. Titan is an upgrade of Jaguar, a previous supercomputer at Oak Ridge, that uses graphics processing units (GPUs) in addition to conventional central processing units (CPUs). It is the first such hybrid to perform over 10 petaFLOPS. The upgrade began in October 2011, commenced stability testing in October 2012, and it became available to researchers in early 2013. The initial cost of the upgrade was US$60 million, funded primarily by the United States Department of Energy.
Active Became operational October 29, 2012
Sponsors US DOE and NOAA (<10%)
Operators Cray Inc.
Location Oak Ridge National Laboratory
Architecture
  18,688 AMD Opteron 6274 16-core CPUs
  18,688 Nvidia Tesla K20X GPUs
Power 8.2 MW
Operating system Cray Linux Environment
Space 404 m² (4,352 ft²)
Memory
693.5 TiB (584 TiB CPU and 109.5 TiB GPU)
Storage 40 PB, 1.4 TB/s IO Lustre filesystem
Speed 17.59 petaFLOPS (LINPACK)
        27 petaFLOPS theoretical peak
Cost $97 million
Ranking TOP500: #2, June 2013
Purpose Scientific research
Legacy Ranked 1 on TOP500 when built.
First GPU based supercomputer to perform over 10 petaFLOPS
Web site www.olcf.ornl.gov/titan/
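
A quick way to relate the two speed figures above is LINPACK efficiency: the measured Rmax divided by the theoretical Rpeak. The tiny Python sketch below is only an illustration using Titan's published numbers from the list above, not part of any official toolchain:

# LINPACK efficiency: how much of the theoretical peak the benchmark actually achieved.
rmax_pflops = 17.59    # measured LINPACK speed from the listing above, in PFLOPS
rpeak_pflops = 27.0    # theoretical peak from the listing above, in PFLOPS

efficiency = rmax_pflops / rpeak_pflops
print(f"Titan LINPACK efficiency: {efficiency:.1%}")   # prints roughly 65.1%

GPU-heavy hybrids such as Titan tend to sit lower on this ratio than homogeneous CPU machines like the K computer.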

3 Sequoia

IBM Sequoia is a petascale Blue Gene/Q supercomputer constructed by IBM for the National Nuclear Security Administration as part of the Advanced Simulation and Computing Program (ASC). It was delivered to the Lawrence Livermore National Laboratory (LLNL) in 2011 and was fully deployed in June 2012.
Operators LLNL
Location Livermore, Alameda County,
         SFBA, Northern California,
         United States
Power 7.9 MW

Operating system CNK operating system
                 Red Hat Enterprise Linux
Space 3,000 square feet (280 m²)
Memory 1.5 PiB
Speed 16.32 PFLOPS
Purpose NW&UN, astronomy, energy, human genome, and climate change

4 K computer
In June 2011, TOP500 ranked K the world's fastest supercomputer, with a computation speed of over 8 petaflops, and in November 2011 K became the first computer to top 10 petaflops.
Active Operational June 2011
Sponsors MEXT, Japan
Operators Fujitsu
Location RIKEN Advanced Institute for Computational Science
Architecture 88,128 SPARC64 VIIIfx processors, Tofu interconnect
Power 12.6 MW
Operating system Linux
Speed 10.51 petaflops (Rmax)
Ranking TOP500: 4th, as of June 2013

5 Mira

Mira is a petascale Blue Gene/Q supercomputer. As of June 2013, it is listed on TOP500 as the fifth-fastest supercomputer in the world. It has a performance of 8.16 petaflops and consumes 3.9 MW of power. The supercomputer was constructed by IBM for Argonne National Laboratory's Argonne Leadership Computing Facility with the support of the United States Department of Energy, and partially funded by the National Science Foundation. Mira will be used for scientific research, including studies in the fields of material science, climatology, seismology, and computational chemistry. The supercomputer is being utilized initially for sixteen projects, selected by the Department of Energy.


Operators Argonne National Laboratory
Power 3.9 MW
Operating system Linux
Speed 8.16 PFLOPS
Ranking TOP500: #5, June 2013
Purpose Material science, climatology, seismology, computational chemistry

6 Piz Daint

Site: Swiss National Supercomputing Centre (CSCS)

Manufacturer: Cray Inc.
Cores: 115,984
Linpack Performance (Rmax) 6,271.0 TFlop/s
Theoretical Peak (Rpeak) 7,788.9 TFlop/s
Power: 2,325.00 kW
Memory:
Interconnect: Aries interconnect
Operating System: Cray Linux Environment
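
Another figure worth deriving from the numbers above is energy efficiency, i.e. sustained FLOPS per watt, which is the metric behind the Green500 list. A small Python sketch using only the Piz Daint values quoted above:

# Energy efficiency = sustained LINPACK performance per watt of power drawn.
rmax_tflops = 6271.0   # Rmax from the listing above, in TFlop/s
power_kw = 2325.0      # power draw from the listing above, in kW

gflops_per_watt = (rmax_tflops * 1000.0) / (power_kw * 1000.0)   # GFLOPS divided by watts
print(f"Piz Daint: {gflops_per_watt:.2f} GFLOPS per watt")       # about 2.70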

7 Stampede 



Site: Texas Advanced Computing Center (TACC), University of Texas at Austin
Manufacturer: Dell
Cores: 462,462
Linpack Performance (Rmax) 5,168.1 TFlop/s
Theoretical Peak (Rpeak) 8,520.1 TFlop/s
Power: 4,510.00 kW
Memory: 192,192 GB
Interconnect: Infiniband FDR
Operating System: Linux
Compiler: Intel
Math Library: MKL
MPI: MVAPICH2




8 JUQUEEN


JUQUEEN is a Blue Gene/Q supercomputer built by IBM for Forschungszentrum Jülich in Germany. It succeeded JUGENE (Jülich Blue Gene), a Blue Gene/P system that had itself replaced the earlier JUBL. At its introduction JUGENE was the second-fastest computer in the world, and in the month before its decommissioning in July 2012 it still held 25th position on the TOP500 list. The computers are owned by the Jülich Supercomputing Centre (JSC) and the Gauss Centre for Supercomputing. With 65,536 PowerPC 450 cores clocked at 850 MHz and housed in 16 cabinets, JUGENE reached a peak processing power of 222.8 TFLOPS (Rpeak); with an official LINPACK rating of 167.3 TFLOPS (Rmax) it took second place overall and was the fastest civil/commercially used computer in the TOP500 list of November 2007.
Site: Forschungszentrum Juelich (FZJ)
System URL: http://www.fz-juelich.de/ias/jsc/EN/Expertise/Supercomputers/JUQUEEN/JUQUEEN_node.html
Manufacturer: IBM
Cores: 458,752
Linpack Performance (Rmax) 5,008.9 TFlop/s
Theoretical Peak (Rpeak) 5,872.0 TFlop/s
Power: 2,301.00 kW
Memory: 458,752 GB
Interconnect: Custom Interconnect
Operating System: Linux

9 Vulcan 

Manufacturer: IBM
Cores: 393,216
Linpack Performance (Rmax) 4,293.3 TFlop/s
Theoretical Peak (Rpeak) 5,033.2 TFlop/s
Power: 1,972.00 kW
Memory: 393,216 GB
Interconnect: Custom Interconnect
Operating System: Linux



10 SuperMUC




SuperMUC is the name of a supercomputer at the Leibniz-Rechenzentrum (Leibniz Supercomputing Centre) in Garching near Munich which provides sustained computing power in the petaflop/s regime.
Operators Leibniz-Rechenzentrum
Location Garching, Germany
Architecture 18,432 Intel Xeon 8-core CPUs
Operating system SUSE Linux Enterprise Server
Memory 288 TB
Storage 12 PB
Speed 2.90 petaFLOPS
Ranking TOP500: #6, November 2012
Web site www.lrz.de/services/compute/supermuc/
SuperMUC has 18,432 Intel Xeon Sandy Bridge-EP processors running in IBM System x iDataPlex servers, with a total of 147,456 cores and a peak performance of about 3 petaFLOPS (3 × 10^15 FLOPS). The main memory is 288 terabytes (288 × 10^12 bytes), together with 12 petabytes (12 × 10^15 bytes) of hard disk space based on the IBM General Parallel File System (GPFS). It also uses a new form of cooling that IBM developed, called Aquasar, which uses hot water to cool the processors, a design that should cut cooling electricity usage by 40 percent, IBM claims.
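
The headline numbers in that paragraph are easy to sanity-check with a few lines of arithmetic; the Python sketch below is purely illustrative and uses only the figures quoted above:

# Sanity-check SuperMUC's published figures.
processors = 18432            # Intel Xeon Sandy Bridge-EP chips
cores_per_processor = 8
total_cores = processors * cores_per_processor
print(total_cores)            # 147456, matching the 147,456 cores quoted above

peak_flops = 3e15             # ~3 petaFLOPS peak
print(round(peak_flops / total_cores / 1e9, 1), "GFLOPS per core")   # ~20.3

memory_bytes = 288e12         # 288 TB of main memory
print(round(memory_bytes / total_cores / 1e9, 2), "GB per core")     # ~1.95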


Reference
===> Wikipedia (wikipedia.org)
===> TOP500 (top500.org)

Tuesday, 22 October 2013

Solaborate



Solaborate

 Official logo

Solaborate Launches Public Beta Inviting Professionals to Use First Technology Collaboration Platform





The Los Angeles Based Startup That Received $1M Funding in May Takes its Platform Public and Launches Mobile App



Los Angeles – October 22, 2013 – Solaborate, a social and collaboration platform designed specifically for technology professionals, today welcomes wider use of its real-time communication and productivity tools with the launch of its public beta and a new mobile app for Android.



In May, Solaborate received $1M in seed funding, which the company has used to perfect its platform, taking it from private to public beta in just five short months. As part of this move, Solaborate now offers audio calling, video conferencing, file sharing and screen sharing along with social and collaboration capabilities via easy, powerful integrations with Facebook, LinkedIn, Twitter and Google+.



Solaborate is a new way for the technology community to be more productive. Technology professionals and companies are invited to connect, collaborate and discover opportunities while creating an ecosystem around products and services. The public beta will feature top global technology companies such as Apple, Google, Microsoft, SAP, Facebook, IBM, Samsung and Cisco. Users can now post on a wall, follow, message, share rich media content and much more.



Unlike other social and collaboration platforms, Solaborate allows video calling straight from the browser and simultaneous use of screen sharing with video calling, without downloads or plug-ins. 



Users can also send instant messages and share photos and rich media files straight from their browser. Companies can use Solaborate to support and demonstrate products and services while offering a more personalized customer experience.


Today Solaborate also released a mobile app, available for download on Google Play. Integrating most of its currently available Web features, Solaborate’s convenient mobile app features a user-friendly approach and a familiar interface. Users can access personal, company, product and service profiles while sending and receiving messages, posting on walls, asking questions, providing commentary, discovering people, companies and more. 


The Solaborate app has been designed keeping in mind that the navigation and overall layout appear exactly as they should on your smartphone or tablet screen.

Pick up your phone, check it out yourself, and then tell me your views in the comments below.

As you can see in the screenshots below, each option is available in the right place, exactly where it should be for the best user experience.



         Navigation                                 Profile View                                       Wall
 


“We are excited to announce that Solaborate is now publicly available both on the web and mobile to everyone in the technology ecosystem. Our mission is to connect technology professionals and provide a dedicated place for you, your company and your products and services by providing all the tools and services to allow you to be more productive. We understand that it’s important for technology professionals to be able to stay connected and engaged, whether they’re at the office or on the go, ” said Labinot Bytyqi, the Founder and CEO of Solaborate. 



Signup is available via email or through your existing social media accounts like LinkedIn, Facebook or Google+. Visit us at www.solaborate.com to join or download the mobile app for your Android device. It’s free and always will be.





About Solaborate:

Solaborate is a social and collaboration platform dedicated to technology professionals and companies to connect, collaborate, discover opportunities, and create an ecosystem around products and services. Solaborate provides technology professionals with a central place with the right tools and services to collaborate in real time and be more productive. Solaborate goes beyond internal integration to enable interaction with people outside the company, including experts, potential hires and even customers. It differentiates itself in the world of social networking by focusing on the needs of tech professionals. 


Learn more about Solaborate and its capabilities by watching this video.




This is going to be a great place for professionals to hang out online, look for jobs, and grow their network. Whatever you think you need to advance professionally is here on Solaborate, and this is just the start: as the developer team gets a massive response from the market and more online traffic, they are going to add more interesting features and a better user interface. It is not bad at all in its present state, but it should become even more polished and good looking in the coming days, giving you an experience you could hardly imagine.

When I joined Solaborate, I was amazed to see so much on a single site. I guess that for a computer geek and future IT professional like me, it is the best place to build up my profile.


Wednesday, 9 October 2013

Risks of Using Cracked Windows 8



Risks of Using Cracked Windows 8

 


Hi guys, let's have a serious talk about this topic. You will get answers to the following questions:

1- Is it safe for me to use a hacked/cracked Microsoft Windows 8?

2- Am I risking my privacy by using such cracked versions of Windows?

3- Why is Windows 8 hard to crack, and why can't you write your own Windows-authentication remover program for it (like we did with Windows 7)?

4- Windows 8 has an all-new built-in antivirus named Windows Defender, which is also a big headache for black-hat hackers!


I will be explaining all the points above along with their respective solutions, so sit tight :)
Okay, without wasting any more time, let's get started.

While reading this first answer, things will also become clear regarding the other questions.

1- Is it safe for me to use a hacked/cracked Microsoft Windows 8?


The answer is no, because by using such a cracked version of the Windows 8 operating system you are simply putting your system's security at risk.
To understand why, we have to look at the main logic behind the availability of such versions online, free of cost, all over the Internet and other media.

In the world of security evaluation and analysis, which includes both good and bad hackers, everyone does the same job: as soon as new software comes to light, whether it is launched online or a trial/evaluation version reaches the market, they get hold of it, try out their debugging skills, and look for security holes in the core architecture of the software (software here meaning any software system, operating system, or application).

Once they reached the point where it seemed almost impossible to hack Windows 8 because of its advanced kernel, they tried a different technique: stop Windows from running the Windows 8 genuine check and give the user full access to the system and its operating-system functionality. To do that, they developed a small software package for Windows 8 Professional (and for the other releases too) containing their own exploit, and released it along with a Windows 8 ISO online, free of cost. You might have downloaded it and cracked Windows by running that simple program after installing Windows 8 on your system.

So here we are: you think the bad hackers did this so that you, and everyone else, could get Windows 8 free of cost, but the actual reason behind cracking Windows 8 was to create a security hole in the firewall of Windows 8.
As I told you before, Windows 8 has a built-in antivirus. Bypassing an antivirus program was not a big issue for the hackers; the real problem arose when they came to know that the built-in antivirus of Windows 8 has been programmed into the kernel of Windows 8 itself.

That is why it was almost impossible for the hackers to bypass the security protocols of Windows 8, so they decided that if they couldn't find a hole in the security of Windows 8, they would create one.

Yes, you got it: that cracking program is the tool that creates security holes in your system.

I hope you now have your answer as to why you are putting your online and system security at risk by using those free cracked copies of Windows 8 available online.

My advice to those who are using such cracked versions of Windows 8:

       Do not put your own and your company's security at risk by using such non-authentic, pirated copies of an operating system. If you can, buy a genuine version from the Microsoft site, or if you are not willing to pay a couple of hundred bucks for it, migrate to free software such as Linux, BSD, and others.

To learn about Linux, go to http://linux.com
and to buy Windows 8 online, go to
http://windows.microsoft.com/en-IN/windows/buy
Thanks for reading.
Share it if you can :)


Tuesday, 8 October 2013

Virtualization


Virtualization

Introduction-


Virtualization is the process of simulating a virtual environment in which software or a program can run on a different platform or hardware system. It can also be defined as follows:
                  Virtualization is the process of creating a virtual version (in place of the original) of something such as an operating system, a server, a storage device, or network resources.

Virtualization can be seen as part of an overall trend in enterprise IT that includes autonomic computing, a scenario in which the Information Technology environment will be able to manage itself based on perceived activity, and utility computing, in which computer processing power is seen as a utility that clients can pay for only as needed. The usual goal of virtualization is to centralize administrative tasks while improving scalability and overall hardware-resource utilization. With virtualization, several operating systems can be run in parallel on a single CPU. This parallelism tends to reduce overhead costs and differs from multitasking, which involves running several programs on the same OS.
Operating system virtualization is the use of software to allow a piece of hardware to run multiple operating system images at the same time. The technology got its start on mainframes decades ago, allowing administrators to avoid wasting expensive processing power.

Types of Virtualization


Reasons for virtualization


  • In the case of server consolidation, many small physical servers are replaced by one larger physical server to increase the utilization of costly hardware resources such as CPU. Although hardware is consolidated, typically OSes are not. Instead, each OS running on a physical server becomes converted to a distinct OS running inside a virtual machine. The large server can "host" many such "guest" virtual machines. This is known as Physical-to-Virtual (P2V) transformation.
  • Consolidating servers can also have the added benefit of reducing energy consumption. A typical server runs at 425 W and VMware estimates an average server consolidation ratio of 10:1.
  • A virtual machine can be more easily controlled and inspected from outside than a physical one, and its configuration is more flexible. This is very useful in kernel development and for teaching operating system courses.
  • A new virtual machine can be provisioned as needed without the need for an up-front hardware purchase.
  • A virtual machine can easily be relocated from one physical machine to another as needed. For example, a salesperson going to a customer can copy a virtual machine with the demonstration software to his laptop, without the need to transport the physical computer. Likewise, an error inside a virtual machine does not harm the host system, so there is no risk of breaking down the OS on the laptop.
  • Because of the easy relocation, virtual machines can be used in disaster recovery scenarios.

Hardware virtualization


Hardware virtualization or platform virtualization refers to the creation of a virtual machine that acts like a real computer with an operating system. Software executed on these virtual machines is separated from the underlying hardware resources. For example, a computer that is running Microsoft Windows may host a virtual machine that looks like a computer with the Ubuntu Linux operating system; Ubuntu-based software can be run on the virtual machine.
In hardware virtualization, the host machine is the actual machine on which the virtualization takes place, and the guest machine is the virtual machine. The words host and guest are used to distinguish the software that runs on the physical machine from the software that runs on the virtual machine. The software or firmware that creates a virtual machine on the host hardware is called a hypervisor or Virtual Machine Manager.
Different types of hardware virtualization include:
  1. Full virtualization: Almost complete simulation of the actual hardware to allow software, which typically consists of a guest operating system, to run unmodified.
  2. Partial virtualization: Some but not all of the target environment is simulated. Some guest programs, therefore, may need modifications to run in this virtual environment.
  3. Paravirtualization: A hardware environment is not simulated; however, the guest programs are executed in their own isolated domains, as if they are running on a separate system. Guest programs need to be specifically modified to run in this environment.
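
Of the types just listed, full virtualization is the easiest to demonstrate end to end with a hosted hypervisor. The sketch below is only a minimal illustration, assuming QEMU/KVM is installed, /dev/kvm is usable, and the disk-image and ISO paths are hypothetical placeholders you would replace:

# Minimal sketch: create a disk image and boot an installer ISO in a QEMU/KVM guest.
import subprocess

disk = "guest-disk.qcow2"          # hypothetical disk image path
iso = "ubuntu-desktop.iso"         # hypothetical installer ISO path

# Create a 20 GB copy-on-write disk for the guest.
subprocess.run(["qemu-img", "create", "-f", "qcow2", disk, "20G"], check=True)

# Boot the guest: 2 GB RAM, 2 vCPUs, install media attached, boot from CD first.
subprocess.run([
    "qemu-system-x86_64",
    "-enable-kvm",        # use hardware virtualization (Intel VT-x / AMD-V)
    "-m", "2048",
    "-smp", "2",
    "-hda", disk,
    "-cdrom", iso,
    "-boot", "d",
], check=True)

Everything the guest operating system sees here (disk, CD drive, memory) is virtual hardware presented by the hypervisor.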

 Memory virtualization

  Memory virtualization allows networked, and therefore distributed, servers to share a pool of memory to overcome physical memory limitations, a common bottleneck in software performance. With this capability integrated into the network, applications can take advantage of a very large amount of memory to improve overall performance, system utilization, increase memory usage efficiency, and enable new use cases. Software on the memory pool nodes (servers) allows nodes to connect to the memory pool to contribute memory, and store and retrieve data. Management software and the technologies of memory overcommitment manage shared memory, data insertion, eviction and provisioning policies, data assignment to contributing nodes, and handles requests from client nodes. The memory pool may be accessed at the application level or operating system level. At the application level, the pool is accessed through an API or as a networked file system to create a high-speed shared memory cache. At the operating system level, a page cache can utilize the pool as a very large memory resource that is much faster than local or networked storage.

 Storage Virtualization: Storage virtualization is commonly used in storage area networks (SANs). It helps perform recovery, archiving, and backup tasks more quickly and easily, and it can be implemented at different levels of the SAN.
While the benefits are numerous, here are some of the core ones:
  • Enterprise continuity – lower downtime for mission-critical apps and programs.
  • Improved utilization – increase efficiency by raising the usage of server assets from roughly 25% (the average) to 60%+.
  • Reduced cost – requires far less hardware, energy, and floor space.
  • Simplified management – deployment, administration, and monitoring from one console.

 Network Virtualization

     Network virtualization is the process of combining hardware and software network resources and network functionality into a single, software-based administrative entity: a virtual network. Network virtualization involves platform virtualization, often combined with resource virtualization.
Network virtualization is categorized as either external, combining many networks, or parts of networks, into a virtual unit, or internal, providing network-like functionality to the software containers on a single system. Whether virtualization is internal or external depends on the implementation provided by vendors that support the technology.
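
As a very small taste of internal network virtualization on Linux, the sketch below tags an 802.1Q VLAN sub-interface onto a physical NIC with the standard iproute2 tools. It is only an illustration: the interface name, VLAN ID, and address are placeholders, and the commands need root privileges.

# Minimal sketch: create VLAN 100 on top of eth0 using iproute2 (run as root).
import subprocess

parent, vlan_id = "eth0", 100                 # hypothetical NIC name and VLAN ID
vlan_if = f"{parent}.{vlan_id}"

# Add the VLAN sub-interface, give it an address, and bring it up.
subprocess.run(["ip", "link", "add", "link", parent,
                "name", vlan_if, "type", "vlan", "id", str(vlan_id)], check=True)
subprocess.run(["ip", "addr", "add", "192.168.100.2/24", "dev", vlan_if], check=True)
subprocess.run(["ip", "link", "set", vlan_if, "up"], check=True)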

 Components of a virtual network

Various equipment and software vendors offer network virtualization by combining any of the following:
  • Network hardware, such as switches and network adapters, also known as network interface cards (NICs)
  • Network elements such as firewalls and load balancers
  • Networks, such as virtual LANs (VLANs) and containers such as virtual machines (VMs)
  • Network storage devices
  • Network M2M elements such as telecommunications 4G HLR and SLR devices
  • Network mobile elements such as laptops, tablets, and cell phones
  • Network media, such as Ethernet and Fibre Channel

 Server Virtualization

     Server virtualization is the masking of server resources, including the number and identity of individual physical servers, processors, and operating systems, from server users. The server administrator uses a software application to divide one physical server into multiple isolated virtual environments. The virtual environments are sometimes called virtual private servers, but they are also known as guests, instances, containers or emulations.

As you can see, all the workstations are running their respective operating systems and doing their work, while for an enterprise requirement all the data processed on the local client-side workstations should be stored in the central server after being received by the virtualized central server. This matters because it gives each user the independence to choose the operating-system environment of the workstation he works on.


 The intention is to spare the user from having to understand and manage complicated details of server resources while increasing resource sharing and utilization and maintaining the capacity to expand later.

Virtualization here means partitioning one server directly into many virtual servers called virtual machines (VMs). Using hypervisor technology, networking, storage, and computing resources are pooled and delivered to each virtual machine. Even though they share the resources of the same physical server, the virtual machines run independently of one another.
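
On a Linux host running a hypervisor such as KVM, the libvirt management API exposes the guest VMs carved out of the physical server. A minimal sketch, assuming the libvirt-python bindings are installed and a local qemu:///system hypervisor is running:

# Minimal sketch: list the virtual machines defined on the local hypervisor via libvirt.
import libvirt

conn = libvirt.open("qemu:///system")   # connect to the local system hypervisor
try:
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "shut off"
        print(f"{dom.name():20s} {state}")
finally:
    conn.close()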

 Desktop Virtualization

Desktop virtualization is the concept of separating the logical desktop from the physical machine.
One form of desktop virtualization, virtual desktop infrastructure (VDI), can be thought as a more advanced form of hardware virtualization. Rather than interacting with a host computer directly via a keyboard, mouse, and monitor, the user interacts with the host computer using another desktop computer or a mobile device by means of a network connection, such as a LAN, Wireless LAN or even the Internet. In addition, the host computer in this scenario becomes a server computer capable of hosting multiple virtual machines at the same time for multiple users.
As organizations continue to virtualize and converge their data center environment, client architectures also continue to evolve in order to take advantage of the predictability, continuity, and quality of service delivered by their Converged Infrastructure. For example, companies like HP and IBM provide a hybrid VDI model with a range of virtualization software and delivery models to improve upon the limitations of distributed client computing. Selected client environments move workloads from PCs and other devices to data center servers, creating well-managed virtual clients, with applications and client operating environments hosted on servers and storage in the data center. For users, this means they can access their desktop from any location, without being tied to a single client device. Since the resources are centralized, users moving between work locations can still access the same client environment with their applications and data. For IT administrators, this means a more centralized, efficient client environment that is easier to maintain and able to more quickly respond to the changing needs of the user and business.
     Thin clients, which are seen in desktop virtualization, are simple and/or cheap computers that are primarily designed to connect to the network. They may lack significant hard disk storage space, RAM or even processing power, but many organizations are beginning to look at the cost benefits of eliminating “thick client” desktops that are packed with software (and require software licensing fees) and making more strategic investments. Desktop virtualization simplifies software versioning and patch management, where the new image is simply updated on the server, and the desktop gets the updated version when it reboots. It also enables centralized control over what applications the user is allowed to have access to on the workstation.

Application virtualization

 Application virtualization is software methodology that encapsulates application software from the underlying operating system on which it is executed. A fully virtualized application is not installed in the traditional sense, although it is still executed as if it were. The application behaves at runtime like it is directly interfacing with the original operating system and all the resources managed by it, but can be isolated or sandboxed to varying degrees. In this context, the term "virtualization" refers to the artifact being encapsulated (application), which is quite different to its meaning in hardware virtualization, where it refers to the artifact being abstracted (physical hardware).
 The application remains unaware that it accesses a virtual resource instead of a physical one. Since the application is now working with one file instead of many files and registry entries spread throughout the system, it becomes easy to run the application on a different computer, and previously incompatible applications can be run side by side.


Benefits of application virtualization

Allows applications to run in environments that do not suit the native application:
  • e.g. Wine allows some Microsoft Windows applications to run on Linux.
  • e.g. CDE, a lightweight application virtualization tool, allows Linux applications to run in a distribution-agnostic way.

Operating System Virtualization

    Operating System virtualization is the use of software that allows a piece of hardware to run multiple operating system images at the same time.
In operating system-level virtualization, a physical server is virtualized at the operating system level, enabling multiple isolated and secure virtualized servers to run on a single physical server. The "guest" OS environments share the same OS as the host system – i.e. the same OS kernel is used to implement the "guest" environments. Applications running in a given "guest" environment view it as a stand-alone system. The pioneer implementation was FreeBSD jails; other examples include Solaris Containers, OpenVZ, Linux-VServer, LXC.

As in the screenshot above, this machine is currently running in an Ubuntu environment, but in the real world it is sharing its hardware with Windows 7.
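
Operating-system-level virtualization on Linux comes down to kernel namespaces: each "guest" gets its own view of PIDs, mounts, hostname, and so on while sharing the host kernel. A rough sketch using the util-linux unshare tool (it needs root, and the shell launched inside the namespaces is just an example):

# Minimal sketch: start a shell in fresh PID and mount namespaces (it shares the host
# kernel, which is exactly what OS-level virtualization / containers do).
import subprocess

subprocess.run([
    "unshare",
    "--fork",         # fork before exec so the child can be PID 1 in the new namespace
    "--pid",          # new PID namespace: the shell sees itself as PID 1
    "--mount-proc",   # mount a private /proc so `ps` reflects the new namespace
    "/bin/bash",
], check=True)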

Full Virtualization

 In computer science, full virtualization is a virtualization technique used to provide a certain kind of virtual machine environment, namely, one that is a complete simulation of the underlying hardware. Full virtualization requires that every salient feature of the hardware be reflected into one of several virtual machines – including the full instruction set, input/output operations, interrupts, memory access, and whatever other elements are used by the software that runs on the bare machine, and that is intended to run in a virtual machine. In such an environment, any software capable of execution on the raw hardware can be run in the virtual machine and, in particular, any operating systems. The obvious test of virtualization is whether an operating system intended for stand-alone use can successfully run inside a virtual machine.
Similarly, full virtualization was not quite possible with the x86 platform until the 2005-2006 addition of the AMD-V and Intel VT-x extensions (see x86 virtualization). Many platform virtual machines for the x86 platform came very close and claimed full virtualization even prior to the AMD-V and Intel VT-x additions. Examples include Adeos, Mac-on-Linux, Parallels Desktop for Mac, Parallels Workstation, VMware Workstation, VMware Server (formerly GSX Server), VirtualBox.
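
Whether the AMD-V / Intel VT-x extensions mentioned above are present can be read straight from the CPU flags on Linux. A tiny illustrative sketch:

# Minimal sketch: check /proc/cpuinfo for hardware-virtualization support on Linux.
# "vmx" means Intel VT-x, "svm" means AMD-V.
with open("/proc/cpuinfo") as f:
    cpuinfo = f.read()

if "vmx" in cpuinfo:
    print("Intel VT-x supported")
elif "svm" in cpuinfo:
    print("AMD-V supported")
else:
    print("No hardware virtualization extensions detected")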

Data Virtualization

Data virtualization is an approach to data management that allows an application to retrieve and manipulate data without requiring technical details about the data, such as how it is formatted or where it is physically located.
Database virtualization may use a single ODBC-based DSN to provide a connection to a similar virtual database layer.
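
In practice the application simply opens that single virtual DSN and queries it as if it were one database; the virtualization layer behind the DSN worries about where the data really lives. A hedged sketch using pyodbc, where the DSN name, credentials, and table are hypothetical:

# Minimal sketch: query a data-virtualization layer through a single ODBC DSN.
import pyodbc

# "VirtualSales" is a hypothetical DSN pointing at the virtual database layer.
conn = pyodbc.connect("DSN=VirtualSales;UID=report_user;PWD=secret")
cursor = conn.cursor()

# The query is written against the virtual schema; the layer federates the real sources.
cursor.execute("SELECT region, SUM(amount) FROM orders GROUP BY region")
for region, total in cursor.fetchall():
    print(region, total)

conn.close()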

Benefits of Data Virtualization

  • Reduce risk of data errors
  • Reduce systems workload through not moving data around
  • Increase speed of access to data on a real-time basis
  • Significantly reduce development and support time
  • Increase governance and reduce risk through the use of policies
  • Reduce the data storage required

Database virtualization

With database virtualization, we create multiple layers over a single database and make them available to users across the required platforms or areas. The single database is manipulated and edited from everywhere, and all of those changes are saved back to the central main database. In technical terms, database virtualization is the decoupling of the database layer, which lies between the storage and application layers within the application stack. Virtualization of the database layer enables a shift away from the physical toward the logical or virtual. It enables compute and storage resources to be pooled and allocated on demand, which allows both the sharing of single-server resources for multi-tenancy and the pooling of server resources into a single logical database or cluster. In both cases, database virtualization provides increased flexibility, more granular and efficient allocation of pooled resources, and more scalable computing.

Virtualization in education

Virtualization plays a significant role in the field of education wherever it is applied. Successfully managing multiple sites and an array of faculty, staff, and student needs is becoming increasingly difficult as budgets decrease and equipment and facilities age. Virtualization in education helps cut costs, increase efficiency, and adapt quickly and automatically to changing requirements.
Choose from:
  • Hardware virtualization. Run multiple operating systems (for example, Linux and Windows) on a single server.
  • Application virtualization. Rapidly deploy applications, even those that conflict with each other, with low administrative overhead.
  • Presentation virtualization. Execute an application on one computer and present it with another.
  • Desktop virtualization. Run multiple operating systems (OSs) on a single desktop. Centrally execute Windows 7 in virtual machines (VMs) running on servers.
  • Virtualization management. Manage your entire virtual and physical infrastructures with a unified set of tools.
All the products and technologies we use in virtualization solutions have a common, policy-based management system that helps to ease the load on system managers.

Benefits
  • Help reduce your total cost of ownership (TCO) and increase your return on investment (ROI) across your entire computing infrastructure.
  • Turn computing assets into on-demand services to improve your business agility.
  • Maintain "one application, one server" while reducing physical server sprawl through server consolidation and provisioning.
  • Provide optimal desktop solutions for different user needs while still meeting IT requirements.
  • Centrally provision and manage both physical and virtual resources.
  • Help ensure effective business continuity and disaster recovery by compartmentalizing workflows and maintaining failover plans.
  • Rapidly model and test different environments without significant expansion of hardware and physical resources.
  • Improve security by isolating computing layers and minimizing the chance of widespread failure.


 Thanks....
Please share it.....

