So I bought an SSD to upgrade the speed of my PC, and started transferring extraneous files over to a 10TB portable USB drive that I had. In this effort to reduce the size of my hard drive before trying to clone it to the SSD to make it bootable, I used the program SpaceSniffer to see if there were any other places on my hard drive that I could pull data from to reduce the size further, and to make my new SSD as "clean" as possible before the cloning.
In doing so, I found a folder which is actually another "clone" of an even older terabyte hard drive that I'd upgraded from previously, and completely forgot about. Here's the thing though: I already have "Show hidden files and folders" checked, and I'm able to see other hidden files and folders. However, I am NOT able to see this folder for some reason:
Has anyone ever heard of this before, or can explain what's going on here? Also note the "read only" checkbox having a square in it, and the hidden checkbox being checked AND disabled in the very first screenshot above.
This seems to be a "super" hidden folder for some reason. And having used Windows since the 3.1 days, I've NEVER seen anything like this before. So, do any bigger Windows experts have any explanation for the strange behavior of this folder?
After research, I found out that Robocopy has a bug where it alters the attributes of the destination directory, marking it as a system and hidden folder. It's been known for quite some time, and there are numerous blogs and questions out there on the subject. In order to reverse this change, I had to alter the attributes so that the folder is no longer marked as either a system or a hidden folder. These attributes are cleared in cmd using the parameters 's' and 'h' respectively.
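For example (the folder path here is hypothetical — substitute your own), the fix looks like this in cmd:

```cmd
:: Clear the System (-s) and Hidden (-h) attributes on the folder,
:: recursing into subfolders (/S) and applying to directories too (/D)
attrib -s -h "D:\OldDriveClone" /S /D
```

After running this, the folder shows up normally in Explorer again.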
If you're clever and don't want to deal with the problem in the first place (as it's a known bug with Robocopy), when you write out your Robocopy line in cmd, include the minus-attributes switch (/A-:SH) at the end of your command. This will stop the destination folder from being hidden and from being marked as a system folder, thus preventing the problem from ever happening in the first place:
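A sketch of the preventive form (source and destination paths are placeholders):

```cmd
:: /E copies subdirectories including empty ones; /A-:SH strips the
:: System and Hidden attributes from copied items, so the destination
:: folder never becomes "super hidden" in the first place
robocopy "C:\Source" "D:\Dest" /E /A-:SH
```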
Have you ever wished you could watch your favorite videos in high definition, even if they were originally recorded in lower quality? Well, now you can, thanks to a new feature we are experimenting with in Edge Canary: video super resolution (VSR).
Video super resolution uses machine learning to enhance the quality of video viewed in Microsoft Edge by using graphics card agnostic algorithms to remove blocky compression artifacts and upscale the video resolution, so you can enjoy crisp and clear videos on YouTube and other video streaming platforms without sacrificing bandwidth.
Video super resolution is automatically enabled by Edge and indicated by an HD icon in the address bar. The feature can be computationally intensive, so this icon allows a user to be in full control of enabling or disabling the feature.
Turning off video super resolution persists for all sites in Edge preferences. This will also restore any video super resolution features supported by the graphics card driver on your system. To change your preferences for the feature or provide feedback, you can find the toggle and the feedback buttons by navigating to edge://settings/system in your Edge browser.
Inference GPU time needs to be under 16 ms for 60 fps video (1000 ms / 60 frames) and under 33 ms for 30 fps video. To scale to higher resolutions and frame rates on this GPU, we have a separate, lighter model that we use for resolutions above 360p.
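The per-frame budget above falls directly out of the frame rate — inference must finish before the next frame is due. A quick sketch:

```python
def frame_budget_ms(fps: float) -> float:
    """Milliseconds available per frame at a given frame rate."""
    return 1000.0 / fps

# ~16.7 ms per frame at 60 fps, ~33.3 ms at 30 fps
print(f"{frame_budget_ms(60):.1f} ms at 60 fps")
print(f"{frame_budget_ms(30):.1f} ms at 30 fps")
```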
This lighter model shines at on screen text and improves sharpness and readability. The model is 10 times smaller than our heavier model and should allow us to improve a larger set of video resolutions and give us an opportunity to enable video super resolution on lower end devices in the future. The policy to switch between models and picking the right model for the device, video resolution, framerate is something that we are actively iterating on.
Existing video super-resolution models aim to learn a mapping function that predicts high-quality (HQ) frames from their low-quality (LQ) or low-resolution counterparts. However, to generate low-quality/low-resolution frames for the training data set, most of these methods use predefined operators (e.g., bicubic down-sampling) to simulate the LQ input. This limits their general application on real-world video data, especially for streaming video data with high compression rates. In our approach, we take video compression into consideration and train our models with LQ-HQ video pairs that we synthesize by running several popular video codecs with different compression strategies.
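As an illustration of this kind of degradation synthesis (a sketch, not necessarily the team's actual pipeline), LQ counterparts of an HQ clip can be produced by re-encoding it at varying compression levels, for instance with ffmpeg and the H.264 codec:

```shell
# Hypothetical example: generate low-quality training counterparts of
# hq.mp4 at increasing compression (higher CRF = lower quality)
for crf in 15 25 35; do
  ffmpeg -i hq.mp4 -c:v libx264 -crf "$crf" "lq_crf${crf}.mp4"
done
```

Pairing each re-encoded clip with the original yields LQ-HQ training pairs that reflect real streaming compression rather than synthetic bicubic down-sampling.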
Inspired by the recent success in large language models, MSRA pre-trained a base video enhancement model on one million video clips from diverse categories by a self-supervised LQ-HQ restoration paradigm. By further considering multiple types of video compression artifacts from different codecs, the ML model can significantly recover different video content with a large range of video degradation levels (e.g., CRF (Constant Rate Factor) values from 15 to 35 in H.264). To improve the visual quality of the model's output, we take a two-stage training strategy in which the first stage aims to recover structure information (e.g., the edges and boundaries of objects) while the second stage is optimized with perceptual and generative adversarial objectives for high-frequency textures (e.g., tree leaves and human hair).
Evaluation metrics for video enhancement tasks are another area we focused on. Existing metrics in this space, like LPIPS (Learned Perceptual Image Patch Similarity) and FVD (Fréchet Video Distance), cannot fully reflect human perceptual preference. To better understand the performance of our models, the MSRA team built an end-to-end pipeline for subjective evaluations of video enhancement tasks. Specifically, we ask participants to annotate their preferences for different methods on real-world video data from ten categories. The participants are asked to not only consider the static quality of each video frame but also take dynamic quality into account, which is more important for improving user experiences. The proposed pipeline shows that over 90% of users prefer our final models to the default bilinear upscaling in the browser.
The base machine learning models certainly improve visual quality over the traditional bilinear up-sampling methods, but the high compute cost makes real-time inference a challenge on common consumer GPUs. To overcome this, the DeepSpeed team applied state-of-the-art compression techniques to reduce the model size and inference cost.
Video super resolution is an exciting feature that will make your video streaming experience more immersive and enjoyable. It will enhance the details and the beauty of your favorite content. We are working to improve the models and hardware coverage to provide better experience for even more users. Keep an eye out for updates from the team as we increase the availability to all Canary channel users over the coming weeks.
Selkowitz believes the new thin triple super window could save even more. Current double-glazed windows consist of two layers of glass with a low-e coating and argon gas in the gap between the glass layers to further reduce heat transfer. The innovation of the Berkeley Lab super window is threefold: It inserts a third layer of very thin glass sandwiched between the two layers of a double-glazed window, adds a second low-e coating, and replaces the argon gas with krypton gas, which is much more insulating than argon in the very narrow space between the panes.
While there are other triple-glazed windows on the market, the virtue of this one is that it is the same width and virtually the same weight as existing double-glazed windows. This avoids having to redesign the window sash and frame, which would pose a significant cost obstacle to market penetration.
Energy use associated with all aspects of window performance, including heating and cooling, costs U.S. building owners about $50 billion annually. The single largest contributor to building energy use is heat loss in colder climates. In warmer climates, uncontrolled sunlight transmitted through windows adds to air conditioning costs.
The super window was in fact invented and patented by Berkeley Lab more than 20 years ago. But at the time, there were no viable sources of large sheets of very thin glass, and the cost of the materials made it prohibitively expensive.
Still, electrochromic windows, at about five to eight times the cost of the super window, are considered a high-end specialty item and are just beginning to enter markets. The low-e coating, on the other hand, is now in nearly 90 percent of all windows sold.
The challenge in getting there, Hart said, was not only the scientific innovation in coating design but also overcoming marketplace barriers. To make the case, Berkeley Lab researchers created lab and field test facilities to measure window energy efficiency, and also provided design and validation software tools used by industry and a nonprofit organization, the National Fenestration Rating Council, to rate windows. Low-e coatings have saved the country $150 billion in energy costs in homes, compared to a base case with conventional double glazing.
The Windows key seems to not have any use in Ubuntu, but since I'm just coming from Windows I'm used to this key having some function.
How can I make good use of the Windows key in Ubuntu?
I've seen that I can remap keys in System>Preferences>Keyboard>Layout>Options>Alt/Super key behavior, but I have no idea what the choices meta, super, hyper mean. The help button in this dialog doesn't give any specifics about them.
If you want to use it like in Windows, to trigger the menu, go to System -> Preferences -> Keyboard Shortcuts. Scroll down to and click on 'Show the panel's main menu'. Click on the right-hand cell, which should change to 'New shortcut...', and press the Windows key.
You can bind it to other things as well. You can use it as part of a key combination for another shortcut or for an action in Compiz. For example, I have my Windows key bound to the Initiate Window Resize action (because this is easier than trying to grab the sides of the window), which can be found in CompizConfig Settings Manager under Window Management -> Resize Window.