How to Create a Virtual Machine on 32-Bit Windows 7


Christin Baus

Jun 30, 2024, 9:51:53 AM
to wratanvico

The system even installs and runs, but some features do not work properly: the boleto (bank-slip) issuance module, for example, does not open. We have already consulted Prodam and ATSI (Lenice) and were told that it does not open because the Operating System version is 64-bit.

As for VirtualBox, the trick is actually the opposite: letting VirtualBox install 64-bit systems on Windows 10 (this works through the Hyper-V hypervisor layer). The tutorial is below.

Fabrício, good afternoon!
The concern is not only about licensing but also about the vulnerabilities tied to XP; this virtual machine needs to be joined to the REDE.SP domain to be able to reach the SISACOE environment.
PRODAM is in the final phase of testing 32-bit compatibility on Windows 10 (and, by extension, Windows 7) for machines running a 64-bit Operating System.

I think it is important to make our best effort to move off XP, but in terms of sustainability, since we have no licenses, XP Mode would be a valid option. The alternative would be to stand up a Unix solution with Wine emulation, but until that is ready, the XP Mode solution seems reasonable to me.

For SISACOE specifically, PRODAM's desktop-support team has already finished testing and it worked well.
The procedure is being finalized and we should share it by the end of this week.

This solution will probably work for any application based on an Oracle client; I would even venture that it should work for any 32-bit application on a 64-bit Operating System (this one specifically requires an additional DLL).

The SISACOE incompatibility problem on 64-bit Operating Systems is over. PRODAM is publishing the step-by-step guide with the new procedures, along with the required DLLs, at the link below:

A guest operating system running in the emulated computer accesses these devices, and runs as if it were running on real hardware. For instance, you can pass an ISO image as a parameter to QEMU, and the OS running in the emulated computer will see a real CD-ROM inserted into a CD drive.
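As a minimal illustration of the point above, QEMU can be invoked directly with an ISO attached as a virtual CD-ROM (the file name `image.iso` is a placeholder):

```shell
# Boot an emulated PC whose virtual CD drive contains the given ISO.
# -m sets guest RAM in MiB; -boot d makes the guest boot from CD-ROM.
qemu-system-x86_64 -m 2048 -cdrom image.iso -boot d
```

The guest OS sees an ordinary CD drive with a disc inserted, exactly as the text describes.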

QEMU can emulate a great variety of hardware from ARM to SPARC, but Proxmox VE is only concerned with 32- and 64-bit PC clone emulation, since it represents the overwhelming majority of server hardware. The emulation of PC clones is also one of the fastest due to the availability of processor extensions which greatly speed up QEMU when the emulated architecture is the same as the host architecture.

The PC hardware emulated by QEMU includes a motherboard, network controllers, SCSI, IDE and SATA controllers, serial ports (the complete list can be seen in the kvm(1) man page), all of them emulated in software. All these devices are the exact software equivalent of existing hardware devices, and if the OS running in the guest has the proper drivers it will use the devices as if it were running on real hardware. This allows QEMU to run unmodified operating systems.

This, however, has a performance cost, as running in software what was meant to run in hardware involves a lot of extra work for the host CPU. To mitigate this, QEMU can present paravirtualized devices to the guest operating system, where the guest OS recognizes it is running inside QEMU and cooperates with the hypervisor.

Generally speaking, Proxmox VE tries to choose sane defaults for virtual machines (VMs). Make sure you understand the meaning of the settings you change, as doing so could incur a performance slowdown or put your data at risk.

When creating a virtual machine (VM), setting the proper Operating System (OS) allows Proxmox VE to optimize some low-level parameters. For instance, Windows OSes expect the BIOS clock to use the local time, while Unix-based OSes expect the BIOS clock to have the UTC time.
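The OS type can also be set from the command line with the `qm` tool; a sketch, where the VM ID 100 is a placeholder:

```shell
# Tell Proxmox VE the guest is a modern Windows, so the virtual
# BIOS clock is treated as local time (VM ID 100 is a placeholder).
qm set 100 --ostype win10

# For a Linux 2.6+ kernel guest, which expects the clock in UTC:
qm set 100 --ostype l26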

Additionally, the SCSI controller can be changed. If you plan to install the QEMU Guest Agent, or if your selected ISO image already ships and installs it automatically, you may want to tick the QEMU Agent box, which lets Proxmox VE know that it can use its features to show some more information, and complete some actions (for example, shutdown or snapshots) more intelligently.
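The same agent option can be enabled via `qm`; a sketch assuming a placeholder VM ID of 100:

```shell
# Enable the QEMU Guest Agent option for VM 100 (placeholder ID).
# The agent software itself must still be installed inside the guest.
qm set 100 --agent enabled=1
```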

Proxmox VE allows VMs to boot with different firmware and machine types, namely SeaBIOS and OVMF. In most cases you want to switch from the default SeaBIOS to OVMF only if you plan to use PCIe passthrough.
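Switching to OVMF can be sketched on the command line as follows; the VM ID 100 and the storage name `local-lvm` are placeholders:

```shell
# Switch VM 100 from the default SeaBIOS to OVMF (UEFI firmware).
# OVMF stores its settings on a small EFI vars disk, added here on
# the placeholder storage local-lvm.
qm set 100 --bios ovmf --efidisk0 local-lvm:1,efitype=4m
```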

For Windows guests, the machine version is pinned during creation, because Windows is sensitive to changes in the virtual hardware, even between cold boots. For example, the enumeration of network devices might be different with different machine versions. Other OSes like Linux can usually deal with such changes just fine. For those, the Latest machine version is used by default. This means that after a fresh start, the newest machine version supported by the QEMU binary is used (e.g. the newest machine version QEMU 8.1 supports is version 8.1 for each machine type).
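Pinning a machine version manually looks like this with `qm` (VM ID 100 is a placeholder; the exact versions available depend on the installed QEMU binary):

```shell
# Pin VM 100 to a specific i440fx machine version so the virtual
# hardware stays stable across reboots, as done for Windows guests.
qm set 100 --machine pc-i440fx-8.1

# The q35 machine type can be pinned the same way:
qm set 100 --machine pc-q35-8.1
```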

The IDE controller has a design which goes back to the 1984 PC/AT disk controller. Even though this controller has been superseded by more recent designs, each and every OS you can think of has support for it, making it a great choice if you want to run an OS released before 2003. You can connect up to 4 devices to this controller.

The SATA (Serial ATA) controller, dating from 2003, has a more modern design, allowing higher throughput and a greater number of devices to be connected. You can connect up to 6 devices to this controller.

A SCSI controller of type VirtIO SCSI single, with the IO Thread setting enabled for the attached disks, is recommended if you aim for performance. This is the default for newly created Linux VMs since Proxmox VE 7.3. Each disk will have its own VirtIO SCSI controller, and QEMU will handle the disks' IO in a dedicated thread. Linux distributions have had support for this controller since 2012, and FreeBSD since 2014. For Windows OSes, you need to provide an extra ISO containing the drivers during the installation.
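The recommended setup can be sketched with `qm`; the VM ID 100, the storage name `local-lvm`, and the 32 GiB size are placeholders:

```shell
# Use the VirtIO SCSI single controller type and attach a 32 GiB
# disk with its own I/O thread (ID and storage are placeholders).
qm set 100 --scsihw virtio-scsi-single --scsi0 local-lvm:32,iothread=1
```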

The VirtIO Block controller, often just called VirtIO or virtio-blk, is an older type of paravirtualized controller. It has been superseded, in terms of features, by the VirtIO SCSI controller.

On each controller you attach a number of emulated hard disks, which are backed by a file or a block device residing in the configured storage. The choice of a storage type will determine the format of the hard disk image. Storages which present block devices (LVM, ZFS, Ceph) will require the raw disk image format, whereas file-based storages (Ext4, NFS, CIFS, GlusterFS) will let you choose either the raw disk image format or the QEMU image format.

The raw disk image is a bit-to-bit image of a hard disk, similar to what you would get when executing the dd command on a block device in Linux. This format does not support thin provisioning or snapshots by itself, requiring cooperation from the storage layer for these tasks. It may, however, be up to 10% faster than the QEMU image format.
(See the benchmark _Khoa_Huynh_v3.pdf for details.)
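Both formats can be created and converted with the standard `qemu-img` tool; a sketch, where the file names and the 32G size are placeholders:

```shell
# Create a raw (bit-for-bit) image and a qcow2 (QEMU format) image.
qemu-img create -f raw disk.raw 32G
qemu-img create -f qcow2 disk.qcow2 32G

# qemu-img can also convert between the two formats:
qemu-img convert -f raw -O qcow2 disk.raw disk-converted.qcow2
```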

Setting the Cache mode of the hard drive will impact how the host system will notify the guest systems of block write completions. The No cache default means that the guest system will be notified that a write is complete when each block reaches the physical storage write queue, ignoring the host page cache. This provides a good balance between safety and speed.

If you want the Proxmox VE storage replication mechanism to skip a disk when starting a replication job, you can set the Skip replication option on that disk. As of Proxmox VE 5.0, replication requires the disk images to be on a storage of type zfspool, so adding a disk image to other storages when the VM has replication configured requires skipping replication for this disk image.

If you would like a drive to be presented to the guest as a solid-state drive rather than a rotational hard disk, you can set the SSD emulation option on that drive. There is no requirement that the underlying storage actually be backed by SSDs; this feature can be used with physical media of any type. Note that SSD emulation is not supported on VirtIO Block drives.
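The three per-disk settings just described map to options appended to the disk definition; a sketch, with the VM ID 100 and storage `local-lvm` as placeholders:

```shell
# Per-disk options are appended to the disk definition:
#   cache=none   - the "No cache" default described above
#   replicate=0  - skip this disk during storage replication
#   ssd=1        - present the disk to the guest as an SSD
qm set 100 --scsi0 local-lvm:32,cache=none,replicate=0,ssd=1
```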

The option IO Thread can only be used when using a disk with the VirtIO controller, or with the SCSI controller when the emulated controller type is VirtIO SCSI single. With IO Thread enabled, QEMU creates one I/O thread per storage controller rather than handling all I/O in the main event loop or vCPU threads. One benefit is better work distribution and utilization of the underlying storage. Another benefit is reduced latency (hangs) in the guest for very I/O-intensive host workloads, since neither the main thread nor a vCPU thread can be blocked by disk I/O.

A CPU socket is a physical slot on a PC motherboard where you can plug a CPU. This CPU can then contain one or many cores, which are independent processing units. Whether you have a single CPU socket with 4 cores, or two CPU sockets with two cores each, is mostly irrelevant from a performance point of view. However, some software licenses depend on the number of sockets a machine has; in that case it makes sense to set the number of sockets to what the license allows.
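The socket/core trade-off can be sketched with `qm` (VM ID 100 is a placeholder); both layouts give the guest four processing units:

```shell
# One socket with four cores:
qm set 100 --sockets 1 --cores 4

# Two sockets with two cores each (same total, different licensing
# footprint, as the text above notes):
qm set 100 --sockets 2 --cores 2
```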

With the cpuunits option, nowadays often called CPU shares or CPU weight, you can control how much CPU time a VM gets compared to other running VMs. It is a relative weight which defaults to 100 (or 1024 if the host uses legacy cgroup v1). If you increase this for a VM it will be prioritized by the scheduler in comparison to other VMs with lower weight.

Forcing a CPU affinity can make sense in certain cases but is accompanied by an increase in complexity and maintenance effort, for example if you want to add more VMs later or migrate VMs to nodes with fewer CPU cores. It can also easily lead to asynchronous and therefore limited system performance if some CPUs are fully utilized while others are almost idle.
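Both scheduling knobs discussed above are set with `qm`; a sketch, where the VM ID 100 and the core range are placeholders:

```shell
# Double the relative CPU weight of VM 100 compared to VMs left
# at the cgroup v2 default of 100:
qm set 100 --cpuunits 200

# Pin the VM's threads to host cores 0-3 (use with care, for the
# reasons given above):
qm set 100 --affinity 0-3
```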
