Yes, I searched the forums. Today my Admin page said there was an update to 6.10.10, and now the Apps upload is broken: "App installation is no longer supported with App Upload or Available Apps." WTF!
I'm assuming I'm not the only one who uses the ReadyNAS platform for Plex. Has anyone figured out how to update Plex in the post-6.10.10 world?
Thank you
My RN204 couldn't run Plex decently, so it's just used for photo/kids data backup now. My Plex is hosted on an AMD 2700X with a 32 TB RAID array and an older NVIDIA 1060 for transcoding. It streams 4K without any issues. A bit overkill, and the next upgrade will be to an Intel chip for the Quick Sync capability. I guess it's pretty rock solid with Plex transcoding.
I got the firmware fixed, but it completely screwed up my NAS. It uninstalled DOCKER, so my Docker apps are no longer running, and trying to install Docker fails!!! I am so mad. I've lost so much data because of Netgear!!! Completely uncalled for. I don't know what to do at this point.
OK, after downgrading the firmware to 6.10.9, I went to the link pointed out in message 37. Then I created and edited the needed files easily using the Windows app WinSCP; it makes things a snap. The Docker CLI was then able to install once again.
My Docker apps still weren't running, though, and I guess those containers were gone. I was able to pull updated versions and recreate the containers, and all my settings were still there. I'm back up and running, with all 4 of my Docker apps going once again.
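For anyone in the same spot, the recovery from SSH looks roughly like this. The image name and config path below are examples, not my actual setup — substitute whatever images you were running. Settings survive because they live in the bind-mounted config directory on the data volume, not in the container itself:

```shell
# Re-pull the updated image for each app (example: Plex via linuxserver.io)
docker pull linuxserver/plex

# Recreate the container, pointing it at the SAME config directory as before;
# that is why the old settings come back (path is an example)
docker run -d --name plex \
  --net=host \
  -v /data/docker/plex/config:/config \
  --restart unless-stopped \
  linuxserver/plex
```

Repeat the pull/run pair for each of your containers.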
Still, a firmware update should specifically warn you that it will no longer allow you to install any new apps, and that it will in fact automatically uninstall software. Why is that? This shouldn't happen in the first place, let alone without a clear, big-box WARNING about what installing this firmware is going to do.
It's firmware that makes your ReadyNAS almost worthless. What's next in the 6.10.11 firmware? Deleting all your files and locking you out completely? At least we have some help here to fix things; that is a big PLUS. But it should never have been done in the first place.
Updates are supposed to fix problems, not kill a product line. If there's no easy/quick fix to this problem, it will be a $10k problem to replace 2 x RN516 units. Maybe a lawsuit, or maybe common sense, will prevail! Or does anyone have another fix for this issue? Thanks.
I laugh at the millennials and younger... you fully believe you must always buy the latest and greatest things... and you wonder why you have no money? I'm sorry, but until the devices just quit on their own, they will have their uses. I don't need to transcode 4K. 1080 HD/SD is fine for me. If I want perfect quality, I'll buy a BRD. There is NO way I'm buying more HDDs and desktops and laptops and all the crud they force you to buy when they deliberately make older things obsolete. I also don't stream ANYTHING, and you are foolish to pay for any streaming service. BUY your music, then you OWN it. Nobody can take it away because of some licensing squabble with a streaming provider lol... sorry, I'm just as annoyed at NETGEAR as everyone else. Guess I won't be buying any of their products either... tired of having to replace them every few years for no reason at all other than they won't make anything backward compatible. Peace out.
I'm absolutely shocked they locked out the manual upload for apps. I understand why they want to shut down the online app service, but to hamstring the manual uploads in the process is just ridiculous customer treatment.
I was already unhappy that NetGear is dropping the NAS products, and was expecting to have to go to a competitor in due time... but to then insult us with this latest firmware is just the cherry on top. NetGear is now on my bad list.
Other than for updates that don't need updated dependencies from the Debian distribution, uploads would already fail unless you made changes via SSH that potentially (though IMHO remote, and an acceptable risk for a home system) open up the NAS to malicious content. So instead of leaving it to fail with an error the operator can't explain, they did this. It is actually understandable, even if you don't agree it's the best way to do it. Netgear is certainly never going to make a change that allows unverified content to be loaded, as that could open them up to litigation.
What doesn't make sense to me is also blocking app installs via SSH, where you'd have to knowingly make and accept those changes. Even worse is that it seems to affect some already-installed apps, like Docker, though that may have been inadvertent due to insufficient testing (which Netgear has always been guilty of).
I'm submitting this in the hope that it gets fixed, even though I suspect it may be the result of the macOS 12.3.1 update. It actually started happening well before rc8 was released; the last successful backup was on April 30th. I can create a new share for Time Machine backups and the first backup will be successful, but subsequent backups fail with the Time Machine error "Failed to attach using DiskImages2".
**Update - I was able to mount the Time Machine share manually in Finder using smb://nicknas2.homenet (the DNS domain on my router). It had previously been using nicknas2.local, which doesn't appear to be working anymore. After manually connecting to the share and re-associating the disk in Time Machine, it seems to have started trying to back up. Fingers crossed...
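In case it helps anyone else, the manual reconnect can also be done from Terminal. The host and share names below are mine/hypothetical; the key point is building the URL from the router's DNS name instead of the broken .local name:

```shell
# .local (Bonjour/mDNS) resolution stopped working, so use the DNS name
# configured on the router instead (host and share names are examples)
HOST="nicknas2.homenet"
SHARE="TimeMachine"
URL="smb://${HOST}/${SHARE}"
echo "$URL"    # paste this into Finder > Go > Connect to Server

# After the share is mounted, re-point Time Machine at it
# (prompts for credentials):
#   sudo tmutil setdestination -p "$URL"
```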
Yes, multiple successful backups, including a large one after installing 12.4. Nothing extra in my SMB options. I can confirm, though, that I cannot browse to my Unraid shares through Finder; I have to use the Go menu to manually connect to them. Also, .local is not working anymore for some reason, so I am using the DNS name configured on my router to reach the server. I manually connected to the Time Machine share and then added the disk. Once I did that, it can find it again each time. My Time Machine share is set to private but not hidden.
Thanks. It doesn't appear that your issue was related to the one posted here since yours was a connection problem rather than failing to mount the sparsebundle image. I will try deleting my mac configurations in smb extras and see if that fixes my issue.
I ran into this issue. I ended up deleting ALL local snapshots, which appears to be working (running right now). Unfortunately, I had deleted all existing Time Machine backups on my Unraid box before I attempted this. My guess is that Unraid is not at fault here, but I'm posting in case others run into this issue. My suggestion would be to delete all local snapshots and then see if it begins working.
Just wanted to say I had this issue on both Intel and M1 Macs. Time Machine backups to Unraid were going swimmingly, with the Unraid share hosting an APFS sparseimage. I'm not sure if it was the Unraid 6.10 update or macOS 12.3/12.4 or just bad timing, but then all the computers stopped being able to back up a month or so ago. I was able to make first (non-incremental) backups without issue, but subsequent incremental backups did not work. I had the same macOS console errors as mentioned above: "Operation not supported by device" and "Failed to initialize IO manager: Failed opening folder for entries reading".
I had tried deleting the local snapshots on each Mac as well, to no avail. For reference, that's "tmutil listlocalsnapshots /" to find the snapshot names and then "tmutil deletelocalsnapshots <snapshot_date>" to actually delete them.
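Spelled out, the snapshot cleanup looks like this. The parsing assumes tmutil's usual com.apple.TimeMachine.&lt;date&gt;.local naming for snapshots; adjust if your output differs:

```shell
# Strip snapshot names down to the date tokens that
# "tmutil deletelocalsnapshots" expects, e.g.
#   com.apple.TimeMachine.2022-04-30-120000.local -> 2022-04-30-120000
parse_snapshot_dates() {
  sed -n 's/^com\.apple\.TimeMachine\.\(.*\)\.local$/\1/p'
}

# On the Mac you would run:
#   tmutil listlocalsnapshots / | parse_snapshot_dates | \
#     while read -r d; do tmutil deletelocalsnapshots "$d"; done

# Demo of the parsing against sample listlocalsnapshots output:
printf 'Snapshots for volume group:\ncom.apple.TimeMachine.2022-04-30-120000.local\n' \
  | parse_snapshot_dates
# -> 2022-04-30-120000
```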
Anyway, I applied the SMB Extra Config settings as mentioned above, and then re-attempted the incremental backups. Success! (Update: Time Machine now works on two of the three Macs. The last one still gets the "Failed opening folder for entries reading" error) The following SMB Extras config seems to have done the trick:
I did browse through the documentation (the vfs_fruit(8) and smb.conf(5) man pages) to understand what these options do, and the only callout I have is that the zero_file_id setting is yes by default and may not be necessary. The rest of the options check out, though I'm not sure why this combination makes Time Machine work. I saw another post say that just setting two of these worked.
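The actual config block didn't survive the copy here. For reference, a commonly reported vfs_fruit combination for Time Machine shares looks like the sketch below; these are all real smb.conf options, but treat the exact set as an assumption rather than my verbatim settings:

```ini
# Commonly cited Samba "SMB Extras" settings for Time Machine shares;
# each option is documented in vfs_fruit(8) / smb.conf(5)
[global]
vfs objects = catia fruit streams_xattr
fruit:aapl = yes
fruit:nfs_aces = no
fruit:metadata = stream
fruit:veto_appledouble = no
fruit:zero_file_id = yes        ; already the default, may be unnecessary
```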
As a sidenote, I DID see that the Unraid docs (Configuring_Apple_Time_Machine) recommend having a separate share per computer, but that seems to be outdated (bad?) advice and perhaps should be changed. I've got three Macs backing up to the same Time Machine share with no issues.
It's pretty obvious that Apple made changes in the SMB implementation of macOS that seem to cause these issues.
It's not Unraid's nor Samba's fault. My temporary solution is to run a Netatalk/AFP server in a VM that gets the Time Machine share passed through. With this, incremental Time Machine backups work properly again. The already existing backups that were transferred before also remain, since only the network protocol changes.
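For anyone trying the same workaround, the relevant part of Netatalk 3.x's afp.conf is small. The share name and path below are placeholders for whatever you pass through to the VM:

```ini
; /etc/netatalk/afp.conf (Netatalk 3.x) -- minimal Time Machine share
[Global]
; advertise as a Time Capsule so macOS treats it as a backup target
mimic model = TimeCapsule6,106

[TimeMachine]
path = /mnt/timemachine
time machine = yes
; optionally cap the backup size (in MiB):
; vol size limit = 500000
```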
I'm using Fedora, since it ships the latest version of Netatalk, which has some security fixes.
This is only a temporary solution, since AFP is insecure and was deprecated by Apple years ago, but for home usage it's fine, I guess.
Maybe there are some incompatibilities between Apple's latest changes to their SMB stack and the various Samba versions Unraid ships; that would be my first guess. In my statement I was referring to postings in many different forums (Apple, Synology, iX Systems, etc.), whose Time Machine users all describe the same problem, so I highly doubt that it's a problem with Unraid. Most likely Apple is doing its own thing again; it wouldn't be the first time.
Yeah, this could definitely be the case. I agree completely that Apple could be going off the rails in how things are built. But considering that upgrading the server causes the breakage while the Apple device stays the same, and that the Time Machine functionality in Unraid is built exclusively for Apple devices, I would have hoped that Limetech could have pinned Samba to a known-working version (or at least tested it thoroughly in 6.10).