I've got an FAS2240-4 that is currently sitting in a datacenter for DR purposes. I've also got a Dell R720 there, with the plan of using it as a VMware host in the event of a disaster.
The FAS2240 is SnapMirrored to another NetApp here at the office. I created an NFS volume on it to use for VMs on the Dell R720; however, we're seeing 150-500 ms response times in VMware for the datastore with just one VM booting up.
I looked at different posts for statistics to run and ran some. I can see that all of the disks are constantly close to 100% utilized; SnapMirror didn't appear to be running at the time, and deduplication wasn't running. How can I figure out what is utilizing the disks?
I ran statit several times; the run below covered about 10 seconds, results attached.
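For reference, a typical 7-Mode statit capture (advanced privilege is needed; assuming roughly the 10-second window mentioned above) looks like:

    priv set advanced
    statit -b      (begin collecting)
    ... wait ~10 seconds while the workload runs ...
    statit -e      (end collecting and print the per-disk report)
    priv set admin

Running sysstat -x 1 alongside it gives a per-second view of CPU, disk utilization, and ops.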
Any help is appreciated; I'm completely new to NetApp. Thanks!
The deswizzling process can be disabled, but if and when a disaster occurs, your DR system will run very slowly, because the block remapping will still have to happen and performance will suffer until it finishes. So you actually want deswizzling to run in advance.
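If you want to confirm it's the deswizzler keeping the disks busy, check the WAFL scan status on the mirrored volume (7-Mode, advanced privilege; the volume name here is just a placeholder):

    priv set advanced
    wafl scan status vol_dr_nfs
    priv set admin

An active "volume deswizzling" scan listed for that volume means the remapping is still running.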
This NetApp is acting as a DR filer for ours at the office, purely for file shares. I was going to carve out some space for VMs to run on top of it as well, but it sounds like, as long as deswizzling is enabled and running, the BSAS disks are going to be constantly busy trying to keep up (unless it's never able to finish, stuck in a loop because each SnapMirror update restarts it?).
I'm using WinXP, and I notice in Task Manager that on the Performance tab my CPU usage is 100%. But when I go to the Processes tab and sort by CPU, System Idle Process is taking up 90%. I've double-checked that things like AVG anti-virus aren't actively scanning and that JkDefrag is not defragging. But I know something is taking CPU cycles, because the machine is very slow. How do I determine which process is hammering the CPU if it doesn't show in the process list?
I have been using SnapRAID for several months on a four-disk array. I am quite a newbie. I recently changed a disk. After the disk change and the usual successful sync, I ran snapraid status and saw that there was a lot of fragmentation. After reading about fragmentation in the manual, I ran the following command, which removed the fragmentation.
However, after this command, snapraid status reports the array as 100% not scrubbed. I scrub the array daily, and sometimes manually to investigate the issue, but the scrub finishes quickly and says there is nothing to do.
Every run of the command checks about 8% of the array, but not data already scrubbed in the previous 10 days. You can use the -p, --plan option to specify a different amount, and the -o, --older-than option to specify a different age in days. For example, to check 5% of the array older than 20 days use:
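    snapraid -p 5 -o 20 scrub

In your case, since everything is now marked as not scrubbed, something like snapraid -p 100 -o 0 scrub (scrub everything, regardless of age) should rescrub the whole array and reset the status, if I understand the manual correctly.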
Thank you again for the response; I only now noticed the graph, thanks to you pointing it out. But I did not fully understand your explanation. English is not my native language, so please bear with me.
Hi - thanks - I did manage to get that to stop... I think it was the polarity setting on the Moog exp pedal (I reset and restored). I've updated the topic... it works now, but it only switches between 0 and 100% - nothing in between?
I did see the option for "position", which it kind of 'forced' if I selected FS2... unless there's another place to set it that I didn't see, I'm pretty sure it was set as you've indicated it should be. Either way, for the moment, I am taking datacommando's advice and trying a different expression pedal. Pretty sure I can still return the Moog. Thanks!
Well... thanks for the comic relief. FWIW, I have looked in the manual, but even the part you quote doesn't seem to address how to connect a single expression pedal, and there were a bunch of folks on the forum - including the other guru above - who seemed to indicate the M-Audio pedal would work without modification. I guess I'll find out one way or the other tomorrow. Thanks!
Anyway, if it's a Moog exp pedal, like the EP-3, you just need a Y cable, TRS to TS/TS (like pic 1), and a custom cable with TRS on one side (which you'll plug into your exp pedal) and TS on the other side (which you'll plug into one of the two TS female ends of that Y cable). The custom cable is just a TRS cable with the ring wire cut and isolated on the TS side, and only the tip and sleeve properly soldered to the jack terminals. This has worked for me 100% of the time (even with a few TRS expression knobs), so I guess it should help your situation too. If it's an M-Audio, though, it should work as it is. Still, if it's an HX Stomp, you need that Y cable.
I tried that, but for me it doesn't work. I mean, if I connect a TRS pedal straight into the EXP port, it does the 0/100/0 thing, even with everything set up properly per the manual. The manual isn't clear whether its example uses a TS pedal or a TRS, though... a bit foggy.
We are recently seeing an issue with our SQL Server (2008 R2) instance hitting almost 100% CPU and locking out everything. The server's console is responsive enough to get in and restart the services, but we are at a loss as to what is causing this.
I have been thrown in at the deep end on this and have no experience managing SQL Server apart from a bit of adding users/databases and some SQL experience. So I am relying on Google to try to fix this, but I am not getting very far.
I have read some articles on logging the processes that are running and sending emails when the load average gets too high. I currently have the trigger set at 20%, but sometimes I get nothing at all, which suggests the load is climbing too quickly for it to have a chance to send emails.
Is there anywhere I can look, before I restart the services, that does not need SQL Server Management Studio (I can't connect or do anything inside an active session once the server gets into this state), or after I have restarted, to track down where the problem lies?
The latest logs show that everything worked until 02:00 today; the first error after that is at 02:01 - [298] SQLServer Error: 258, Shared Memory Provider: Timeout error [258]. [SQLSTATE 08001]
The trick is going to be identifying what is causing so much CPU to be used. Since you're just getting started and you're on 2008, the best approach is to set up a server-side trace. I'd suggest reading Gail Shaw's articles, Part 1 and Part 2. That's going to be the easiest way to get going on identifying the root cause.
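A bare-bones sketch of what those articles walk through (the file path is a placeholder, and Gail's version captures more events and adds filters):

    DECLARE @TraceID int;
    DECLARE @maxfilesize bigint = 256;
    DECLARE @on bit = 1;
    -- create the trace, writing to a file on the server
    EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\traces\cpu_trace', @maxfilesize, NULL;
    -- SQL:BatchCompleted (event 12): TextData(1), SPID(12), Duration(13), CPU(18)
    EXEC sp_trace_setevent @TraceID, 12, 1, @on;
    EXEC sp_trace_setevent @TraceID, 12, 12, @on;
    EXEC sp_trace_setevent @TraceID, 12, 13, @on;
    EXEC sp_trace_setevent @TraceID, 12, 18, @on;
    -- start the trace
    EXEC sp_trace_setstatus @TraceID, 1;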
This step is also optional: if the server is currently maxed out and this is interfering with your running diagnostics, then set the database to RESTRICTED_USER mode, which will drop all non-admin connections.
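Something along these lines (the database name is just a placeholder):

    ALTER DATABASE [YourDatabase] SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE;
    -- and to reopen it once you're done diagnosing:
    ALTER DATABASE [YourDatabase] SET MULTI_USER;

ROLLBACK IMMEDIATE kicks out existing non-admin sessions rather than waiting for them to finish.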
While logged into the SQL Server instance as SYSADMIN, execute the DMV query described in this article. What it will do is return the TOP 10 cached execution plans ordered by CPU utilization. It's not just looking at running queries; it's looking at cached execution plans since the last time the server was restarted. Included are some metrics, along with the text of the offending SQL statements.
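I don't have the article to hand, but the query is along these lines (a standard top-CPU query against sys.dm_exec_query_stats; the article's version may differ in the details):

    SELECT TOP 10
        qs.total_worker_time / qs.execution_count AS avg_cpu_time,
        qs.total_worker_time,
        qs.execution_count,
        SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
            ((CASE qs.statement_end_offset
                  WHEN -1 THEN DATALENGTH(st.text)
                  ELSE qs.statement_end_offset
              END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_worker_time DESC;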
It's looking at queries whose plans are still in cache, not all plans that have been cached (and potentially discarded) since the last time SQL started. It's a decent start, but it can easily miss things. It'll never show queries that have a RECOMPILE hint, queries in a proc created WITH RECOMPILE, or anything that's had its plan discarded and recompiled recently (due to memory pressure, stats updates, index changes, etc.).
I have discovered the max memory setting and will be changing it later today once it has been approved. I'll be setting it to 21GB and monitoring the free RAM. I got the info from a 2009 post by Glenn Berry.
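For anyone else reading, the change itself is just sp_configure (21GB = 21504 MB; adjust to your own box):

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 21504;
    RECONFIGURE;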
The max memory is currently still set to the default value - no one here has any SQL training, so things are installed with defaults everywhere. Hopefully I'll get some courses in the new year and be able to understand this a bit better.
Thanks for the tips on not having to stop the services and on being able to get a connection; they're just what I needed. I'll test this on our dev server and use it if/when we get the problem again. I'm hoping the max memory change does the trick.
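For anyone searching later - assuming the connection tip was the Dedicated Administrator Connection (DAC), which gets you into a pegged instance when normal sessions can't connect, it's just:

    sqlcmd -S YourServer -A -E

(-A requests the DAC and -E uses Windows auth; for a remote DAC you first need sp_configure 'remote admin connections', 1.)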
A combination of the max memory change, being able to log in to see what is happening, and getting the info about the current cached execution plans should get to the root of the problem.
P.S. I do not have much experience of Reporting Services, as it is not used much here. Where it is used, I have always ensured that it is not installed on the DB server, as I suspect it may use a fair bit of memory and CPU to render images.
Also, confirm whether SSIS/SSRS/SSAS or something like an antivirus service may be running on the box alongside SQL Server. Except for a high-volume OLTP database, it's unusual to see SQL Server max out the CPU at 100% and then stay in a holding pattern for any significant length of time.
This happened to us. We were running SQL Server on a 16GB TS server (don't ask) and one day SQL Server just pegged the CPU. Impressive, considering the machine had dual Xeons and an otherwise light load. It did it every single day after that, too.
Our fix was to add another 64GB of memory. Worked like a champ. SQL Server 2008 R2 typically uses 6GB of memory on our server but can spike up to 40GB for backups. I don't know why it solved the problem, and frankly I don't care.