I need the max width of the grid to be 1380px, plus margins. If your browser window is wider than 1380px and you scale the window's width down below that, the entire grid scales down in both width AND height.
I kind of have it working once the browser width is scaled below the max grid width, but before the browser width gets below that 1380px, the height of the grid is affected by resizing the browser width.
VW, VH, VMIN and VMAX (viewport units) are supported by Webflow and most browsers now, and my technique comes from a time when support for those units was weaker. I tested your suggestion and it works really well.
Hello. As a developer currently studying, I am working on a laptop running Windows 10, and I would like to install Ubuntu as the daily OS on that laptop. The problem is that I know that on some rare occasions I will still need a working Windows 10 environment for group projects and/or specific software.
So my question is the following: is it better to install Ubuntu and use a Windows 10 virtual machine when needed, or to keep Windows 10 and run an Ubuntu VM on it most of the time?
Put Windows in the VM. That way you never have to worry that porting your Windows installation to a new machine will invalidate the installation ;) If you'll be using Ubuntu more often than Windows, having Ubuntu as the host will save you a lot of boot time.
Pro tip 2: Windows is considerably more of a resource hog than Linux (generally speaking - I'm sure you could configure a Linux install to use lots of resources if you tried hard). You'll need to give the Windows VM an absolute minimum of 8 GB of RAM for it to even be usable, and a minimum of 16 GB for decent performance.
If you are using Ubuntu as your daily operating system, then install Ubuntu on your physical computer and install Windows 10 as a guest OS in a virtual machine. The large size of Ubuntu's default software repositories makes it easy to install and upgrade the development software that you use every day. Because you're going to be installing a lot of software in Windows 10, the bare minimum for disk space is 25 GB (preferably on the SSD), and you will probably need much more than that, especially if you plan on installing Microsoft Visual Studio in Windows 10.
For example, let's say you need to use Microsoft Visual Studio. You can install the more lightweight Visual Studio Code in Ubuntu with sudo snap install code --classic. It is possible to run Python, C, C++, JavaScript, PHP, Java, R and some other programming language code blocks directly in Visual Studio Code using the Code Runner extension. You can install some of your favorite Visual Studio extensions in Visual Studio Code, and switch from VSCode in Ubuntu to Visual Studio in Windows 10 when things get out of hand. You're going to need a virtual machine in order to quickly and easily switch back and forth between Ubuntu and Windows 10.
If you are planning to do graphics-intensive tasks in the VM, there is a distinct advantage in using VMware Workstation 11 or later over VirtualBox. In VMware Workstation 11, up to 2 GB of video memory can be allocated to the guest for graphics-intensive applications, compared to a maximum of 256 MB of video memory in VirtualBox. VMware Workstation 15 and later support up to 3 GB of virtual graphics memory. Your computer has a Core i7 CPU and 16 GB of RAM, so there should be no problem allocating 2 virtual CPUs and 8 GB of virtual RAM to the guest OS.
I spend 70% of my time on my computer inside a VM. The VM gets 10 of my 16 GB of RAM (I could possibly stretch it a bit more, but as people say, Ubuntu doesn't need as much RAM) and access to all my CPU cores, as well as 3D acceleration and 3 GB of VRAM.
Ubuntu is awesome, and I sometimes think: what if it had all the hardware to itself? Every now and then I decide I should try installing it as my main OS and see if it has gotten easier to set up. My experience so far has been painful.
Every time I install Ubuntu as my main OS, I spend a ridiculous amount of time just trying to get it to work as well as it does in a VM, with some drivers making it unstable or slower, not to mention wasting a truckload of my time. And then, when I hope to open Windows in a VM to run a game or something CPU- or GPU-intensive, how can I expect it to run well when the host OS can't even use the hardware properly?
You can only run the VM at 60 Hz, which is a downside; however, mouse movement is somehow still buttery smooth and typing is responsive, even if the app windows inside only refresh at 60 Hz (don't ask me how this works).
Another plus is that when you need to, say, run several versions of PHP and Apache and maybe an Android app, you can easily just do it all. I still haven't found a way to hot-swap Apache, PHP and MySQL on Ubuntu at all, let alone as easily as Windows can.
Or maybe you need DirectX for some game development project on the side, or one of many other scenarios where Ubuntu just can't do it without up to a week of stuffing around, and it's not going to run as fast if Windows is inside the VM under Ubuntu.
And then, when you feel like a break, just suspend your VM and open AAA game titles running at max capacity, 144 Hz with FreeSync and all your custom gear working perfectly, because it all runs faster in Windows, albeit at the cost of an extra gig of RAM being used. Meh.
I have 16 GB of RAM and give 10 GB to the VM, and Windows still has enough for Steam downloads, Discord, a heap of other game launchers and downloaders, and even a web server running as background tasks while I'm working in the VM.
VirtualBox is better if you're not fussed about graphical performance, don't want Ubuntu's animations, and want to frequently switch between Windows and fullscreen Ubuntu with host-key shortcuts. Plus, it has a snapshot feature where you can save the machine at multiple stages and just boot up a previous state if something goes wrong. (If VirtualBox had the same graphical performance as VMware, I wouldn't consider VMware at all.)
When things go bad (network errors, bad data, whatnot), SQL 2005 decides to roll back everything to its original state. This process can take hours and is not needed. I am looking for any of the following:
You could try moving your processes into SSIS. If it is as simple as your example (copying from one table to another), you could try the SQL Server Destination and the OLE DB Destination objects. They allow you to configure batch and commit sizes and work pretty fast.
Otherwise, you could take your example of processing batches broken up into pieces, but instead of running the stored procedures in a loop, send them into a service broker queue. If you do this, you can configure your broker queue to process groups of records in parallel and still retain control over how many parallel processes can run. Then, by adjusting the batch size and the number of batches running in parallel, you could probably get back to a number far closer to your original performance without having a single, large batch.
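As a rough sketch of that parallelism knob (all object names here are invented, and the activation procedure dbo.usp_ProcessBatch, which would RECEIVE a message and run one batch, is assumed to exist already):

    -- Broker must be enabled on the database first: ALTER DATABASE MyDb SET ENABLE_BROKER;
    CREATE MESSAGE TYPE BatchRequest VALIDATION = NONE;
    CREATE CONTRACT BatchContract (BatchRequest SENT BY INITIATOR);

    CREATE QUEUE dbo.BatchQueue
        WITH STATUS = ON,
             ACTIVATION (
                 STATUS = ON,
                 PROCEDURE_NAME = dbo.usp_ProcessBatch,  -- your worker proc: RECEIVE, then run one batch
                 MAX_QUEUE_READERS = 4,                  -- cap on how many workers run in parallel
                 EXECUTE AS OWNER );

    CREATE SERVICE BatchService ON QUEUE dbo.BatchQueue (BatchContract);

    -- Enqueue one message per batch; activation spins up to MAX_QUEUE_READERS workers.
    DECLARE @h uniqueidentifier;
    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE BatchService TO SERVICE 'BatchService'
        ON CONTRACT BatchContract WITH ENCRYPTION = OFF;
    SEND ON CONVERSATION @h MESSAGE TYPE BatchRequest
        (CAST('FirstId=1;LastId=50000' AS varbinary(max)));  -- key range for this batch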
As a simple test, run it as a loop in five query windows. In each window, have it only run every 5th batch (use mod or something to get it to skip) and see if your performance improves. Keep in mind that this parallel processing may hit your CPU or disk sub-systems pretty hard.
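A minimal sketch of that test, assuming the staging table already carries a BatchNumber column (the table and column names are placeholders); set @stream to a different value from 0 through 4 in each query window:

    DECLARE @stream int, @batch int, @maxBatch int;
    SET @stream = 0;   -- 0 in the first window, 1 in the second, ... 4 in the fifth
    SET @batch  = 0;
    SELECT @maxBatch = MAX(BatchNumber) FROM dbo.Staging;

    WHILE @batch <= @maxBatch
    BEGIN
        IF @batch % 5 = @stream   -- each window only picks up every 5th batch
            INSERT INTO dbo.Target (Col1, Col2)
            SELECT Col1, Col2
            FROM dbo.Staging
            WHERE BatchNumber = @batch;

        SET @batch = @batch + 1;
    END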
Wait a minute... let's back the performance pony up a bit. Are you saying that your staging table is an exact copy of everything that the target table will eventually have in it??? :blink: I mean it has ALL of the rows whether they are new or updated?
The number of rows does not change at this stage; I need to score a complete universe. What does change is that I pull a subset of the columns (50-70%) and modify their datatypes and/or categorize them in the process. Afterwards, I take the transformed results and use them in model scoring. Those types of queries take a long time, and when they break, things roll back. I typically use CREATE TABLE followed by INSERT INTO ... SELECT. This at least permits me to see progress, compared to a SELECT INTO statement.
So, no, I am not duplicating data. The scoring script essentially weighs values via CASE statements and sums those results into one value. Picture 100 CASE statements with plus signs connecting them into a single outcome.
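Presumably something along these lines, just far wider (the columns and weights below are invented for illustration):

    SELECT  CustomerId,
            CASE WHEN Age    BETWEEN 18 AND 25 THEN 0.30 ELSE 0.00 END
          + CASE WHEN Region = 'NE'            THEN 0.15 ELSE 0.00 END
          + CASE WHEN Spend  > 1000            THEN 0.55 ELSE 0.00 END
            AS Score                    -- ...imagine ~100 of these terms
    FROM    dbo.TransformedStaging;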
You might want to check out what can be gained with SELECT INTO instead of CREATE TABLE and then INSERT. The gain, if any, is in the logging, since SELECT INTO can be minimally logged - i.e. only the allocated extents are logged instead of the entire rows, as INSERT will do. If volumes are large, this may be one area that can be tuned.
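The two patterns side by side (table and column names are placeholders; minimal logging for SELECT INTO also depends on the database being in the SIMPLE or BULK_LOGGED recovery model):

    -- Fully logged pattern described in the thread:
    CREATE TABLE dbo.Transformed (Id int NOT NULL, Score decimal(9,4) NOT NULL);
    INSERT INTO dbo.Transformed (Id, Score)
    SELECT Id, Score FROM dbo.Staging;

    -- Minimally logged alternative:
    SELECT Id, Score
    INTO   dbo.Transformed2
    FROM   dbo.Staging;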
That would be part of the reason why things take so long... why is it necessary to change the datatype? It also opens a source of rollbacks if the data happens to be incompatible with the changed datatype.
This works best if the index you're using is the clustered index... otherwise you will be doing multiple table scans to find all of the data (which will be VERY inefficient). Also, 5,000 is usually too small a chunk for most processes (it's going to slow the process down somewhat). I'd start around 50,000, and perhaps work up from there, to find the ideal size for your process.
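A sketch of that chunking, assuming the clustered index is on an integer key called Id (table names are placeholders); each pass touches one contiguous key range and commits on its own:

    DECLARE @minId int, @maxId int, @chunk int;
    SET @chunk = 50000;                      -- starting chunk size; tune upward from here
    SELECT @minId = MIN(Id), @maxId = MAX(Id) FROM dbo.Staging;

    WHILE @minId <= @maxId
    BEGIN
        INSERT INTO dbo.Target (Id, Col1, Col2)
        SELECT Id, Col1, Col2
        FROM   dbo.Staging
        WHERE  Id >= @minId AND Id < @minId + @chunk;   -- one range seek on the clustered key

        SET @minId = @minId + @chunk;        -- a failure now only rolls back this chunk
    END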