I have been going back and forth on this for some time now, and I cannot seem to get it working. I have a Docker container where I set up an NVIDIA image for machine learning. I install all the Python dependencies, then start with the pip package installations, and get the first error:
Simple enough: I have a certificate to deal with Cisco Umbrella. With it in place I can install all packages without trouble. However, to be able to install the newest packages I need to upgrade pip, and the upgrade itself works fine. After pip is upgraded to 20.2.3 I suddenly get an error again:
I am kind of at a loss right now: installing the certificate in the container allows me to install packages with pip 9.0.1 (the system default), but not after upgrading to pip 20.2.3. I cannot get it to work with any package. I have tried multiple pip versions, but as soon as I upgrade I lose the certificate, even when trying to reinstall it with
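For what it's worth, newer pip releases ship their own CA bundle and no longer pick up certificates the way pip 9 did, which matches the 9.0.1-works/20.2.3-fails symptom. A minimal sketch of pointing the upgraded pip at the corporate root CA (paths and package name are illustrative and assume a Debian/Ubuntu base image):

```shell
# Add the Cisco Umbrella root CA to the system trust store
# (update-ca-certificates only picks up files ending in .crt).
cp Cisco_Umbrella_Root_CA.cer /usr/local/share/ca-certificates/cisco-umbrella.crt
update-ca-certificates

# Newer pip bundles its own CA list, so point it at the system bundle explicitly:
pip config set global.cert /etc/ssl/certs/ca-certificates.crt
# or per invocation:
pip install --cert /etc/ssl/certs/ca-certificates.crt somepackage
# or as an environment variable in the Dockerfile:
# ENV PIP_CERT=/etc/ssl/certs/ca-certificates.crt
```

`pip config` is only available from pip 10 onward, so the last two forms are the fallback for older versions.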
The steps suggested in the answer and in my question are definitely what one should try. If someone cannot get it working, like me, then in this specific instance it was the IT organisation that had configured traffic to be proxied through Umbrella, and it didn't support the SSL scanning/decryption.
Since you're adding Cisco_Umbrella_Root_CA.cer, you ARE going through a corporate proxy (see Cisco Umbrella Root Certificate); otherwise there would be no need to add that cert. The fact that you "tested it on my private PC without any issues" tells you that the problem is environmental.
You can always run docker run -it nvidia/cuda:10.0-cudnn7-runtime-ubuntu18.04 to get a shell in the container and then run the commands from the Dockerfile by hand. When things break, fall back to standard Linux troubleshooting; you're in an Ubuntu-like environment, after all.
This seems to be a problem with either your certificate (old or invalid) or your (probably outdated) pip version. There's a link below to a discussion of the same (or a similar) problem. I hope this helps.
Edit: It looks like it is saying that the Fedora updates repo currently has mesa-filesystem-23.3 and it is trying to upgrade to that, but the third-party repo (whose package is currently installed, shown as @System) still only has mesa-filesystem-23.2.
My method of dealing with these situations: I keep updating normally, the third-party packages are simply not upgraded, and later the third-party packages (mesa in this case) get updates and the warnings disappear. So far nothing bad has happened from having a couple of packages held back. Third-party repos lag a few days to a week behind the official packages, and it is usually enough to simply wait until the maintainers build new versions.
When doing a system-upgrade, the very first prompt before the download begins is to do the full dnf --refresh upgrade step. This is the stage where users should ensure their system is fully up to date with all installed packages. The upgrade of the current installation should complete with no errors before starting the system-upgrade, to avoid other unwanted side effects.
As Villy suggests, removing the freeworld packages may break functionality, so it is a better suggestion to wait the (usually) short time for the packages in the third-party repos to be updated and for the regular update to complete successfully before doing the system-upgrade.
Removal of those packages is certainly possible but loss of functionality often has unwanted or unanticipated side effects so waiting is a better choice for 99% of users. A system-upgrade is normally not time critical.
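Assuming a standard Fedora setup, the wait-and-skip approach described above looks roughly like this (the release number and package glob are illustrative):

```shell
# Bring the current release fully up to date first.
sudo dnf upgrade --refresh

# If a lagging third-party package blocks the run, it can be skipped
# temporarily instead of being removed:
sudo dnf upgrade --refresh --exclude='mesa-*'

# Only once the regular upgrade completes cleanly, start the system-upgrade:
sudo dnf system-upgrade download --releasever=40
sudo dnf system-upgrade reboot
```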
Thanks for the information. Is there a way to achieve the same result on Fedora Atomic desktops? Currently, to allow a new system update while keeping mesa-*-freeworld from RPM Fusion, I use rpm-ostree override replace, which does not seem to be an optimal solution.
Everything is working well, with all the security and optimisation. But the tutorial is based on WordPress 4.5.2, and once everything is set up I cannot upgrade WordPress because of a permission-denied error. Of course I modified the Dockerfile myself to get WordPress 4.7, but I will not be able to do future upgrades that way.
The problem, as the error description indicates, is permissions. Based on that tutorial, nginx runs as the www-data user, but the WP folders are owned by deployer. If you change the ownership of your $WP_ROOT to www-data:www-data, you will find that you can update your WP. I'm no security expert, so there may be a better way, but this one will work to get it updated. In the comments of the tutorial they say this ownership was chosen on purpose for security, so maybe it's not a good idea; I'm not sure at that level of detail.
On top of this change, the update will not persist if you power down the Docker instance: you'll have to update your WP instance each time you restart the container. It might make sense to put the WP files into a volume so that WordPress can update them as necessary and they persist. That would also be consistent with the DB, which already persists and is part of the upgrade anyway. But these security decisions are above my pay grade.
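A sketch of both suggestions, assuming the tutorial's $WP_ROOT variable; the image name, volume name, and container path are illustrative:

```shell
# Give the web-server user ownership so WordPress's built-in updater can write.
chown -R www-data:www-data "$WP_ROOT"

# Mount the WordPress files as a named volume so updates survive restarts.
docker run -d \
  --name wordpress \
  -v wp_files:/var/www/html \
  my-wordpress-image
```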
Thanks for this. I deleted my extensions by removing .local/share/gnome-shell, rebooted, and re-installed gnome-extensions-app. For what it's worth, the Extensions app does work, though it cannot discover, add, or update anything.
I purchased a PODxt used from a music store this week. I was able to successfully register it at line6.com after making an account there. Per instructions I found on the site for upgrading the firmware, I downloaded and installed Line 6 Monkey on my Windows 7 SP1 computer. After I installed the drivers for the PODxt, Monkey was able to see it. As you can see from the attached screenshot, Drivers, USB Firmware, Line 6 License Manager, and Line 6 Monkey are all installed at the latest versions. But Flash Memory has a question mark and the Update Selection button is grayed out. I also ran License Manager and authorized the PODxt and my computer.
Since the goal of all this was to upgrade the Flash Memory (firmware) to version 3.01, I'm pretty frustrated and would appreciate any help. Since I play guitar and bass, I was very interested to read that with version 3.01 you can install the Bass Expansion Model Pack. I will probably do that later, as it is $99; I only paid $99.99 for the PODxt, so I'll take some time to get used to the basic functions with guitar before moving on to bass.
I know I'm a beginner here, but since the screenshot shows that Monkey can see the PODxt, can see what is installed and the version(s), and shows that the installed USB firmware matches the latest, I think that indicates my computer is communicating with the PODxt.
I am new to Arch Linux and have just recently installed it. I have a problem getting a PDF reader, though. I am trying to install Evince, and when I give the command pacman -Ss evince I get the following response:
No problem there. But when I try to install with pacman -S evince, it claims that it cannot find the file evince-2.28.1-1-i686.pkg.tar.gz in any of the six repositories I use. But since it is actually found with the -Ss flag, it should be there, shouldn't it?
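A 404 on a package that -Ss still lists usually means the local sync databases are stale: the mirror has moved on to a newer package file than the one your database references. Refreshing the databases (and, on Arch, upgrading the whole system rather than doing a partial upgrade) should fix it:

```shell
sudo pacman -Syu       # refresh the sync databases and upgrade the system
sudo pacman -S evince  # then install evince
```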
Or it fails at the blinker error:
ERROR: Cannot uninstall 'blinker'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
The setup code tests whether the venv directory is present and creates it if it is not. So I would recommend deleting the directory (or just moving it somewhere else out of the way) and running the setup again.
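For the blinker error specifically: pip cannot safely uninstall distutils-installed packages because they ship no file manifest. A common workaround (sketch; it leaves the old files in place and installs the new version over them):

```shell
pip install --ignore-installed blinker
```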
Hi. I am currently using a MacBook Pro (mid-2012 model, pre-Retina) and have just upgraded the RAM to 16 GB and replaced the hard drive with an SSD. I have installed High Sierra on it, but when I try installing updates, like the macOS combo update and others, it always shows "The volume does not meet the requirements for this update." What does that mean, and what do I need to do?
Please help. Right now I have an HP Omen computer and can't even re-install Windows due to these BSOD errors! I've tried looking in the forum but didn't find any solution. If I don't get this solved, this is the last time I'm buying an HP PC. Thanks for your time, and I hope some of you can and will help!
Seeing that you can't do any kind of operating system recovery adds weight to a bad device such as the system disk or memory, but it could be any component connected to the motherboard. It could also be a buggy peripheral connected to the PC.
UPDATE: I have now run all the diagnostic tests (F2 BIOS), including the long tests, and I think you're right: one of the two 8 GB RAM modules failed. I tested first with both 8 GB modules inserted, next with the failed module removed, and finally with only the failed module inserted. All three tests were consistent, so I'm going to order new 8 GB RAM modules. The problem has now been identified and I've therefore marked your answer as the solution, thanks a lot @Bill_To!!!
UPDATE #2: I have now installed Windows 10 via the HP cloud restoration method... It's annoying that I have to go through a gazillion upgrades to get to Windows 11, but at least it works without the corrupt memory module, thanks!
What is your reasoning for compiling your own firmware? If you're simply trying to create images with a specific combination of packages (adding some, removing others), you can use the Image Builder and avoid all these hassles.
If you're making more significant changes that involve real code changes, this problem relates to the fact that the hashes used to determine dependencies and such are no longer the same as those from the standard release download.
My reason is that I want the image to take up the entire SD card instead of only a small portion of it, so that I can install additional packages without running out of space.
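If a larger root filesystem is the only change needed, the Image Builder can do that without compiling from source. A sketch, assuming a recent (23.05-era) Raspberry Pi Image Builder; the profile name, package list, and size are illustrative, and on older builders the equivalent setting is CONFIG_TARGET_ROOTFS_PARTSIZE in the builder's .config:

```shell
# Build a stock-based image with a custom package set and a bigger rootfs
# (size in MB), from inside the unpacked Image Builder directory.
make image PROFILE="rpi-4" \
  PACKAGES="luci htop" \
  ROOTFS_PARTSIZE="7000"
```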