Duplicates

CityguyUSA

Mar 13, 2021, 11:38:15 PM
to SuperSorter Chrome extension
I'm convinced the SuperSorter is not so super anymore. As best I can tell, the duplicate processing of the entire list of bookmarks is no longer functioning, and it has failed under Edge. It has most likely failed under the other Chromium browsers as well.
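
For context on what a "duplicate processing" pass has to do, here is a minimal sketch of the idea. This is not SuperSorter's actual code; it assumes the Manifest V3 promise-based chrome.bookmarks API, treats "same URL within the same folder" as the definition of a duplicate, and the function name removeDuplicateBookmarks is made up for illustration.

```ts
// Minimal sketch (hypothetical, not SuperSorter's implementation) of a
// bookmark de-duplication pass using the chrome.bookmarks API.
async function removeDuplicateBookmarks(): Promise<number> {
  const roots = await chrome.bookmarks.getTree();
  const toRemove: string[] = [];

  // Walk every folder; within each folder, keep the first bookmark
  // for a given URL and flag later ones as duplicates.
  const walk = (node: chrome.bookmarks.BookmarkTreeNode): void => {
    if (!node.children) return;
    const seen = new Set<string>();
    for (const child of node.children) {
      if (child.url) {
        if (seen.has(child.url)) {
          toRemove.push(child.id); // duplicate URL in the same folder
        } else {
          seen.add(child.url);
        }
      } else {
        walk(child); // recurse into subfolders
      }
    }
  };

  roots.forEach(walk);

  // Remove flagged duplicates one by one.
  for (const id of toRemove) {
    await chrome.bookmarks.remove(id);
  }
  return toRemove.length;
}
```

In practice the duplicate criterion and folder handling vary by tool, which is presumably the kind of place where cross-browser differences such as the Edge failure would show up.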

Just remember: as more and more software fails, we get closer to the entire digital age crumbling into one giant abend. I already have stories of my DTV service failing so badly that five days of conversations couldn't diagnose and resolve an HDMI incompatibility. The impact on my account was so bad I had to stop paying, because they refused to talk to me about terminating the account. The software is so whacked that it refuses to function as expected. Will they finally realize that they have to test everything? Or is it too expensive to repair at the current stage of quality? Who will maintain the robots and self-driving cars, and how? Donation coders, or minimum-wage workers who have no other options?

A few years back there was a problem with Lightning, the Thunderbird add-on for the calendar function. I hadn't realized that it had stopped notifying me of events, which cost me several hundred dollars. As if that weren't bad enough, the error wasn't resolved for almost two years, because they rely on donation coders who do what they want. There was no way to triage a major function of the software because no one wanted to work on it. Of course I had to abandon software I had used for years, because I have to be able to depend on the core functions always working.

I'm not a big believer in AI, because it has to be coded by people who are poorly trained in how to code and test, let alone do the analysis. Even if we had the best testers in the world, we're still human and prone to failure. AI has to surpass human capability and know when it's failing in order to correct its failures. If people can't do that, how does one expect a machine to teach itself right and wrong? How do you explain a moral right and a moral wrong to a computer? We can't do that with all the legislation and courts in the world, yet we expect machines to achieve success even when the firmware is developed by people? How long until a fallacy gets replicated across the AI scheme and starts to snowball, making more and more critical decisions faster than anyone can even shut down the system? Has anyone even built a switch into the system to shut it down? Google's software is certainly full of flaws, and yet they're the ones running the "captchas" that have forced me to say, twice now, that a mailbox was a parking meter and that an RV is a bus.