Hello,

First of all: nice job, guys! The install wasn't easy, but some Google-Fu has helped so far. I used Rufus in DD mode, to be sure. Version 715 installed, with Measured Launch, LAN, Wi-Fi, fingerprint reader, webcam, DVD, audio, and the "HP Integrated Module" recognized.

For those trying the same hardware: the BIOS needs to be updated. I used the latest one (F61 pass 2; you can find it on the HP website, only under Win7). If you have trouble with the TPM, go back to the BIOS, clear the TPM (after disabling TXT), then re-enable TXT and TPM. And use the Alt consoles to find out why it's not working. I also had to manually format the SSD I'm using (it's an Innodisk SED, but whatever; I did the same for a non-SED SSD). Note that you will not have access to the i7's integrated GPU, since it's not connected at all. Don't look for a switch in the BIOS.
I tried the RELEASE (v6), but got stuck during the install (I don't remember where), then I switched to v7. Since I want to experiment with it, that's fine, unless someone tells me otherwise, perhaps based on the following remarks:
- Is there no way to pass a USB key to a VM? Or at least emulate a CD/DVD drive with an ISO?
- local password disabled? No screen locks?
- VPN unavailable? No idea where to set up a VPN VM ...
- NDVM per NIC? I see what looks like a single VM for both NICs.
Then, for SSH, I had to:
- Ctrl+Shift+T (terminal from the OpenXT UI; root / the password you set up during installation)
- newrole -r sysadm_r
- xec set enable-dom0-networking true
It should be easier than this to activate SSH. And I'm not sure about the difference with:
- nr #alias for newrole-to-admin.
- xec set enable-ssh true
But now I have an SSH console to dom0 (only via Ethernet, BTW ...). But no access to /storage (permissions?). I suppose it's not supposed to be this difficult to upload a file to dom0? Should I add an LVM volume? Can I at least add an external drive as a repository?
Finally, what would you do when there is no Intel iGPU available? I think I saw something related to the driver in the console during install.
Can we try to use an Nvidia driver with OpenXT? Or at least pass through the MXM module to one of the VMs?
Does this laptop have both an Intel integrated GPU and an Nvidia GPU, with Optimus for switching? OpenXT only supports shared access to the Intel integrated GPU and dedicated (passthrough) access to discrete GPUs (usually a PCI card on a desktop, not a secondary GPU on a laptop).
This laptop has a Sandy Bridge CPU? You may be more successful with an Ivy Bridge, Haswell, Broadwell or Skylake laptop that has only an Intel integrated GPU, no secondary GPU.
I tried the RELEASE (v6), but got stuck during the install (I don't remember where), then I switched to v7. Since I want to experiment with it, that's fine, unless someone tells me otherwise, perhaps based on the following remarks:
- Is there no way to pass a USB key to a VM? Or at least emulate a CD/DVD drive with an ISO?
On the stable (6.0) release, you should be able to attach USB devices to Linux and Windows VMs which have guest tools (PV drivers) installed. There are options in the UIVM (user interface VM, a.k.a. local GUI management console) under VM properties. ISOs can be copied to /storage/iso in dom0, then can be selected in the UIVM from VM properties, to mount the ISO as a virtual optical drive in the guest.
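Assuming SSH to dom0 has been enabled as described later in the thread, copying an ISO into place could look like the sketch below. The IP address and filename are placeholders, not values from this thread:

```shell
# From the client machine; 192.0.2.10 stands in for dom0's actual address.
scp debian.iso root@192.0.2.10:/storage/iso/
```

After that, the ISO should be selectable in the UIVM under VM properties as a virtual optical drive.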
- local password disabled? No screen locks?
This is an unfortunate, open bug:
- VPN unavailable? No idea where to set up a VPN VM ...
Some old instructions are available in Section 3.3 (NILF, Network In-Line Filter) of the Developer guide, but may not apply to recent versions of OpenXT:
- NDVM per NIC? I see what looks like a single VM for both NIC.
See Section 5.4 of the Administrator guide:
Then, for SSH, I had to:
- Ctrl+Shift+T (terminal from the OpenXT UI; root / the password you set up during installation)
- newrole -r sysadm_r
- xec set enable-dom0-networking true
It should be easier than this to activate SSH. And I'm not sure about the difference with:
- nr #alias for newrole-to-admin.
- xec set enable-ssh true
I'm not 100% sure, but the first command likely connects a virtual network interface to dom0, while the second command likely enables the SSH server in dom0.
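Putting the thread's commands together, the full sequence from a dom0 terminal would be (a sketch based only on the commands quoted above):

```shell
newrole -r sysadm_r                  # switch to the sysadm_r SELinux role ("nr" alias)
xec set enable-dom0-networking true  # make dom0's virtual network interface reachable
xec set enable-ssh true              # start the SSH server in dom0
```

After this, dom0 should be reachable with a plain `ssh root@<dom0-address>`.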
But now I have an SSH console to dom0 (only via Ethernet, BTW ...). But no access to /storage (permissions?). I suppose it's not supposed to be this difficult to upload a file to dom0? Should I add an LVM volume? Can I at least add an external drive as a repository?

SELinux mandatory access control will limit the operations that are allowed in dom0. It can be disabled temporarily. You can also check /var/log/messages for error messages from SELinux.

You can mount external drives or LVM volumes manually. If you want the mount to persist across reboots, you will need to modify /etc/fstab, which is on a read-only partition that is also measured at boot. You can remount the root filesystem read-write to make the modification. At the next boot, measured launch will indicate that the filesystem has changed, and you can approve the change after entering the root password.
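As a sketch of the /etc/fstab route described above, assuming the dom0 "rw"/"ro" remount aliases mentioned later in the thread; the device path and mount point are illustrative, and the measured, read-only root should be handled with care:

```shell
setenforce 0    # temporarily lift SELinux enforcement
rw              # dom0 alias: remount the root filesystem read-write
# Illustrative entry for an external drive; adjust device and mount point.
echo "/dev/sdb1 /mnt/external ext4 defaults 0 2" >> /etc/fstab
ro              # remount the root filesystem read-only
setenforce 1    # restore SELinux enforcement
# At the next boot, approve the measured-launch change with the root password.
```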
Finally, what would you do when there is no Intel iGPU available? I think I saw something related to the driver in the console during install.
This is not easy to diagnose remotely. There may be clues in /var/log/. A more modern vPro laptop, with only an Intel GPU, would have a better chance of working. Other people on the list may have tips on debugging the problem with the Intel GPU.
Can we try to use an Nvidia driver with OpenXT? Or at least pass through the MXM module to one of the VMs?

Nvidia GPUs don't typically work with OpenXT on laptops. Even on desktops, Nvidia disables (at the level of in-guest drivers) support for GPU passthrough, except on higher-end video cards. Your best bet is a laptop with only an Intel GPU.
GPU pass-through with:
- Citrix XenDesktop 5.6 FP1 Enterprise/Platinum, XenServer 6 Enterprise / Platinum editions
- VMware Horizon View 5.2 and vSphere 5.1
Rich
nr            # alias for "newrole -r sysadm_r"
setenforce 0  # temporarily disable SELinux enforcement
rw            # dom0 alias: remount the root filesystem read-write
setenforce 1  # re-enable SELinux enforcement
ro            # dom0 alias: remount the root filesystem read-only
Some thoughts, catching up on that...
On Tue, Mar 21, 2017 at 8:02 PM, Jean-Philippe Oudet <jp.o...@gmail.com> wrote:
> On Tuesday, March 21, 2017 at 5:34:45 PM UTC-4, Rich Persaud wrote:
>> Does this laptop have both an Intel integrated GPU and an Nvidia GPU, with
>> Optimus for switching? OpenXT only supports shared access to the Intel
>> integrated GPU and dedicated (passthrough) access to discrete GPUs (usually
>> a PCI card on a desktop, not a secondary GPU on a laptop).
>
> Yes and no, the i7-2820 has an iGPU, but AFAIK it's completely deactivated
> in favor of the MXM module, an Nvidia Quadro (1000M, I think).
> No Optimus. Anyways, who made this work IRL??? ;-)
>
> I believe OpenXT is currently displaying the UI on the Nvidia GPU, probably
> with a software renderer, as is often the case. The resolution is not
> good. Perhaps 1024x768?
That seems right. In this case, having the Nvidia as the primary GPU will
make Surfman use the fbdev copy plugin as a fallback (the usual plugin for
an Intel GPU is drm-plugin). I believe there is a ticket to restore the DRM
copy fallback logic, which should handle larger screen resolutions. There is
no plan to address this one soon, though.
> So, essentially, you are saying OpenXT is sharing everything on the iGPU,
> but you can dedicate an external GPU to a VM and process heavy stuff on it
> but only within this VM? I read something like that.
A "default" configuration would be UIVM and guests using PV drivers or
the emulated stdvga from QEMU being displayed by Surfman's DRM plugin
on the Intel GPU. Any guest can be assigned another PCI GPU and use it
on its own, with other guests still being displayed on the Intel GPU.
> But bottom line, what I have today is normal and expected. OpenXT doesn't
> work on a shared Nvidia GPU?
If I understood correctly, that is right.
> Sure, but for the moment, I work with this laptop.
Firmware usually offers a configuration to decide to either use the
Intel GPU or the discrete one as primary. Having the Intel as primary
should allow you to pass-through the Nvidia one to a guest.
> Still, there is this mention in the docs: "Multiple GPU and seamless mouse
> support, allowing you to run multiple 3D Graphics Support VMs on separate
> monitors". I want to understand that better.
The use case I know about that matches this description is having 1+
GPU passed through to 1+ guest with pv-tools installed. PV-tools will
allow for the mouse pointer to move from one guest, with GPU passed
through, to another (screen position can be configured through the
UIVM on the Intel GPU). So that is 1 guest <-> 1 GPU and whatever is
displayed on the integrated Intel GPU (UIVM or guests using QEMU's
stdvga/pv-drivers).
> Yeah, I saw it. But I do not have access to /storage (see below). So it
> looks like another bug of the v.7?
Using the "nr" alias (to "newrole -r sysadm_r") should give you access
to /storage, which is not read-only. Default place for vhds is
/storage/disks and isos /storage/isos, the UI and toolstack should
pick up on that. You might want to look at /config/vms/<uuid>.db file
to change disks paths and run "killall -HUP dbd" to update the db for
the toolstack to use.
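For example (a sketch of the steps just described; <uuid> stays a placeholder for the VM's actual UUID):

```shell
vi /config/vms/<uuid>.db   # adjust the disk path entries for this VM
killall -HUP dbd           # signal dbd to reload so the toolstack sees the change
```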
> And to attach a USB drive to a VM, you need it to exist (and install the
> tools) ... catch-22!
:D. Plugging a USB key into a guest with tools installed, while the guest is
displayed, should assign that USB device to the guest unless enforced
otherwise? I have not played much with that, so that statement is not
reliable. Anyway, you should have access to /storage once admin.
>> And not sure about the difference with:
>>> nr #alias for newrole-to-admin.
Switches to role sysadm_r. (alias to newrole -r sysadm_r)
>>> xec set enable-ssh true
That enables sshd in dom0.
> Virtual NIC? My goal was to connect the real eth0.
By default, NDVM is started with NICs passed-through. Dom0 only has a
xennet pv frontend used, for example, for sshd.
>> But now I have a SSH console to dom0 (only via Ethernet, BTW ...).
>> But no access to /storage (permissions?). I suppose it's not supposed to
>> be so difficult to upload a file to dom0?
>> Should I add a LVM volume?
That might be difficult with only one disk. I am under the assumption
that the installer uses the entire disk at installation (with
"whatever is free" going to /storage). You should be able to shrink
xenclient-storage and add a volume to the VG if really necessary,
though.
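A heavily hedged sketch of that resize, assuming the volume group is named "xenclient" with a "storage" LV (as the xenclient-storage name suggests), that /storage can be unmounted, and that the sizes shown are purely illustrative. Shrinking a filesystem risks data loss; do not run this blindly:

```shell
umount /storage
e2fsck -f /dev/xenclient/storage       # check the filesystem before resizing
resize2fs /dev/xenclient/storage 20G   # shrink the filesystem below the new LV size
lvreduce -L 20G /dev/xenclient/storage # then shrink the logical volume to match
lvcreate -L 5G -n extra xenclient      # create a new LV from the freed space
```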
> So ... Basically it's "normal behaviour" not to present the storage mount to
> the UI?
The use of the terminal is not considered default behavior in enforced
mode. Presumably everything should be doable through the UI.
>> Finally, what would you do when there is no Intel iGPU available? I think
>> I saw something related to the driver in the console during install.
Headless OpenXT was mentioned at some point. AFAIK this is not supported yet.
Otherwise, Surfman will fall back to the fbdev plugin to provide some form of UI.
> But I also saw plenty other Elitebook supported in the list. I doubt they
> are all so different than mine.
Devil's in the details unfortunately... There should be an HP
Elitebook I can get my hands on, and will be looking at GPU
pass-through issues. Without any commitment, if I have the time I will
give it a spin.
> Again, that's strange. SecureView is supposed to allow several VMs displayed
> from several GPUs. If the hypervisor is not designed to allow that, I'm
> missing something.
Someone familiar with SV can likely answer that. It is my
understanding that SV uses a very different graphic stack (not
Surfman).
--
Eric CHANUDET
On Fri, Mar 24, 2017 at 9:53 AM, Jean-Philippe Oudet <jp.o...@gmail.com> wrote:
> On Thursday, March 23, 2017 at 12:29:27 PM UTC-4, Eric Chanudet wrote:
>> Surfman use fbdev copy plugin as a fallback (usual for Intel GPU
>> is drm-plugin).
> What's Surfman? The GUI? [Sorry, but I have a long way to go for the
> technicalities.]
My bad, I should have explained that further.
OpenXT uses a daemon called Surfman, in dom0, to display VMs on
screen. For legacy reasons, Surfman can use various plugins to handle
displaying through different means. The default plugin is "drm-plugin"
which uses the DRM API, the Intel sub-component of DRM (i915)
specifically, to create framebuffers and have the Intel GPU display
them. The only other plugin that is shipped is "linux-fallback" which
uses the fbdev interface instead. It is chosen by Surfman when no
compatible DRM device is found by the DRM plugin.
> Now, I do not understand the impact on security doing that. Are you still
> maintaining the isolation between the VMs?
Each guest will have a QEMU stdvga emulated card and will use that.
Surfman & plugins will get the drawn framebuffer from the emulated
card in QEMU and put it on screen.
> What about doing that with any GPU?
The current logic is using a modified Intel/DRM behavior in dom0 to
have the GPU scan the framebuffer from the emulation. This has
drawbacks, one of which being it is only usable on i915/DRM (Intel
integrated) devices.
There are other approaches and solutions to support a wider range of
graphics hardware, and talks during community calls have brought up
Surfman's limitations. There will likely be an effort to replace it with a
more flexible approach. Hopefully in the near future? :)
> But as I said, in my case, the Adapter+Mouse Switching preferences are not
> displayed in the UI. If that's what you're telling me.
Yes. That sounds like a bug. I believe this should be, starting in
UIVM upper-right "Settings", "Display", "Display Adapter" and "Mouse
Switching" tab.
> I found the explanation for the SELinux enforcement limiting write
> access to the /storage folder. Once I deactivated it, everything worked.
> I'm questioning this design choice, especially when I see you can manage
> permissions for a "sys admin" and a "user".
Having a role, even optional, for image/disk management (access to
/storage & relevant) seems a valid suggestion to me.
> Yeah, hummm ... my question was more: what "xec set enable-dom0-networking
> true" does that "xec set enable-ssh true" does not, to enable SSH?
The first one should do what's necessary to make the PV interface in
dom0 accessible from the outside world (e.g. talk to the toolstack so
the NDVM handles its backend). The second only starts sshd on that
interface.
> It is (clearly?) not possible to upload any kind of file through the UI,
> AFAIK... Did I miss a button or something?
I don't think you missed anything. This is done remotely through
syncui/vm; someone who is more familiar with these will have to chime
in, though.
--
Eric CHANUDET
On Thu, Mar 30, 2017 at 5:09 PM, Jean-Philippe Oudet <jp.o...@gmail.com> wrote:
> About that, I found this page: http://openxt.org/summit/. Is it possible to
> share the presentations? (And make it open/available to the public in the
> future?)
I could not find them anywhere. Rich do you know if that has been made
available?
> So, Surfman is like a virtual "multiviewer"? It takes all the framebuffers
> simultaneously, mixes them (or passes each one sequentially when the
> "seamless" option is not enabled), and then pushes the result to the
> DRM/OGL stack. Am I correct?
Surfman with drm-plugin points the Intel graphic card to the pages
where the guest framebuffer is and configures it to scan the expected
geometry. So only one guest on the shared screen at a time,
multi-screen being only achieved by passing-through another GPU to a
guest.
> I suppose the USB commands are also redirected from dom0.
Yes, vusb-daemon in dom0 uses udev to be notified of new devices.
Then, when asked to, it loads the vusb backend driver for the
requested device and does the PV backend-to-frontend plumbing. Someone
who worked on the back/frontend could give a more detailed explanation,
I am sure.
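To observe the udev events that vusb-daemon is notified of, one could run something like the following from a dom0 shell. This is standard udev tooling, not an OpenXT-specific command:

```shell
# Print kernel/udev events for the USB subsystem as devices come and go.
udevadm monitor --udev --subsystem-match=usb
```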
> If TXT is really a must (as I understand you can make it work on a non-vPro
> CPU?)
VT-d is required, TXT can be disabled if you do not intend to do measurement.
> Last concern: for each of those models, will we be able to pass the discrete
> GPU to a VM, while using the integrated one for the others?
If I understand correctly, it should be ok ;), minus possible pending
issues with PCI GPU pass-through.
> I have a lot of other problems too:
> Debian (Jessie) stuck at a black screen, no means to open a console.
> Tried nomodeset during install. No luck.
Have you tried adding acpi=off to the kernel cmdline? There was a
known bug... OXT-808 if I am correct.
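As a hypothetical example of applying that suggestion during the Debian install, the kernel line of the boot entry (reachable by pressing 'e' or Tab in the boot menu) could be extended like this; the kernel path and root device are illustrative:

```shell
# Example GRUB kernel line with the suggested parameters appended:
linux /boot/vmlinuz root=/dev/sda1 ro quiet acpi=off nomodeset
```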
> Is it possible that bochs_drm is doing that?
Yes. The DISPI implementation in OpenXT's QEMU is incomplete, I believe,
which triggers an error with bochs_drm. I seem to remember there is a
ticket about that, although there was a workaround? I will have a
look...
--
Eric CHANUDET
Also, Linux installs are not easy, because the display takes only the left half of the screen, with the content warped, so it's difficult to read. But that's certainly related to the driver. After that, I can have the VM in full screen at a good resolution. But I did not validate that the Nvidia GPU is seen (probably not, since it is taken over by OpenXT and replaced by its virtual GPU).

Now, I tried to install Windows 8.1 Pro. No luck with the tools. I added the certificates; that helped. But even with them, installed as an Admin, and with all my devotion and googling, I failed. The VM is up and running, but the tools are not recognized by OpenXT. Any advice?
I'll delete and retry the complete install another time, to see if that helps.
Oh boy, I need schematics and boxes/arrows diagrams ... ;-)

For the driver, I can validate that it helps. So, still a valid concern. But it seems this is already being discussed in this group. I'll try to keep up with what you're doing, then.