I'm trying to read a huge log file in C#: approximately 300 MB of raw text. I've been testing my program on smaller files of about 1 MB, storing every log message in a string[] array and searching it with Contains.
However, that approach is too slow and uses too much memory; it will never handle the 300 MB log file. I need a way to grep the file: quickly filter through it, find the useful data, and print the corresponding line of log information for each match.
The big question is scale. I think 300 MB will be my maximum, but my program needs to handle it. What functions, data structures, and search techniques will scale well, in both speed and memory, for reading a log file that big?
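A minimal sketch of the streaming approach (a suggestion, not the only answer; `path` and `pattern` are placeholders for your log file and search string):

```csharp
using System;
using System.IO;

class LogGrep
{
    static void Main(string[] args)
    {
        string path = args[0];     // log file to search
        string pattern = args[1];  // literal substring to look for

        // File.ReadLines streams the file one line at a time, so memory
        // use stays roughly constant no matter how big the file is,
        // unlike File.ReadAllLines, which loads everything into an array.
        foreach (string line in File.ReadLines(path))
        {
            // Ordinal comparison avoids culture-aware matching overhead.
            if (line.IndexOf(pattern, StringComparison.Ordinal) >= 0)
                Console.WriteLine(line);
        }
    }
}
```

Because the enumeration is lazy, a 300 MB file is processed in a single sequential pass with only one line held in memory at a time.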
After not being able to figure anything out, I saw some people with similar issues saying it might be the network card, so I went out and bought a USB 3.0-to-Ethernet adapter. I was still capped at 300 Mbps, and I repeated all the same tests on that adapter as well.
Just bought a new M6 Mark II, and it came with a SanDisk UHS-I 170 MB/s card. I always buy a spare battery and SD card for a new camera, so I went ahead and bought a Lexar UHS-II 300 MB/s card. I figured some people here may have tested similarly fast cards; depending on the camera, they sometimes don't seem to take advantage of the faster speed, or show no noticeable difference between them.
For background, UHS-I and UHS-II are two different data-bus specifications; UHS-II is newer and faster. The M6 II supports UHS-II, but not at its maximum speed. That may be why folks see a speed increase, but not one that's orders of magnitude faster. Reports from last year showed about 50% faster, IIRC.
So now that it's Black Friday season, it's time to pick up some more camera equipment. I've been looking into getting more SD cards and am wondering what the benefit of the more expensive 300 MB/s cards is over the more affordable 150-170 MB/s ones. I'd be using these cards mainly for wildlife, and I want cards that write as fast as possible since I'm always using continuous shooting. Is 300 MB/s necessary, or is it mostly used for videography? I'm looking at getting a 128 GB memory card.
Long before the buffer fills up, the camera already starts writing images to the memory card. These two things happen in parallel: the camera keeps shooting, adding shots to the buffer, while at the same time it empties the buffer by storing shots on the memory card. If shots enter the buffer faster than they can be written to the card, the card's write speed determines how long it takes until the buffer is full.
With a 300 MB/s card, shooting continues at full speed for about 60 shots: that's the moment the buffer fills up, so further shots can only be taken as fast as previous ones are written to the card.
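To see where a figure like 60 shots comes from, here's the arithmetic with illustrative numbers (the frame size, burst rate, and buffer size below are assumptions for the sake of the example, not published specs):

```
into the buffer:  14 frames/s × 32 MB/frame ≈ 450 MB/s
out to the card:                              300 MB/s
net fill rate:    450 − 300               =   150 MB/s

a ~650 MB buffer fills in 650 ÷ 150 ≈ 4.3 s,
i.e. 4.3 s × 14 frames/s ≈ 60 shots at full burst speed
```

With a slower card the net fill rate is higher, so the buffer fills after correspondingly fewer shots.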
That being said, I use the fastest UHS-II SD cards not because of the faster speed in the camera, but because they speed up ingesting the images onto my computer. If you're only copying 10-20 GB at a time you may not notice a big difference, but if you're copying 60-80 GB it can save a lot of time.
I have a 1 Gbps internet connection. When I run a speed test, I'll usually get around 500-600 Mbps down and 800 Mbps up. However, apart from the speeds I get on speedtest.net, everything else seems to max out at around 300 Mbps down. I don't think I've ever downloaded anything from anywhere faster than that.
With the SanDisk Extreme PRO CompactFlash memory card, you get high storage capacity, faster shot-to-shot performance, and cinema-quality video. With high transfer speeds, this card offers the fast, efficient performance you expect.
I've been doing some disk performance benchmarks as part of my evaluation of vSphere 6.5 and have a few questions about behaviour that I can't explain. The most puzzling one of which is a write benchmark which runs faster for a guest VM after I've created a snapshot.
This is the thing that I'm finding really puzzling. No tricks with zeroes, no overhead from snapshot file extension... writes of non-zero data to the test file on the guest of a non-snapshotted VM were around 450 MB/s. Add a snapshot, first write non-zero data to pre-extend it... and then the same write test on the guest shows 730 MB/s write performance, much faster. Why?
Both the base data file and the snapshot file are in the same directory on the same storage volume on the hypervisor, mounted on the HP hardware RAID-5 device. Does ESXi have performance issues with large files, like traditional Unix filesystems had, for example, when triple-indirect blocks were involved in writes to large files? The base data file for the VM is 50 GB, the snapshot file 40 GB (since the only real write activity is my tests of writes to the 40 GB test file). Then again, the 730 MB/s figure was uniform throughout the sequential write test to the 40 GB file on the guest. Why would ESXi be so much faster at writing to the 40 GB snapshot file?
This standard was developed as videographers and filmmakers began shooting higher-definition video and required faster read/write speeds. Additionally, the video speed class standard supports multi-file recording: that is, when a camera also wants to record additional data such as geographic location, height above the ground, date and time, and so on. This makes it ideal for photographers and videographers who use drones, action cameras, and 360 cameras.
Early SD cards had a default bus speed of 12.5 MB/s. However, as the demand for faster cards increased, the SD Association introduced new bus interfaces to enable faster bus speeds and, as a result, increase the theoretical read and write speeds. Today, most cards use one of the following ultra-high-speed (UHS) bus interfaces: UHS-I (up to 104 MB/s), UHS-II (up to 312 MB/s), or UHS-III (up to 624 MB/s).
Using Oracle on EMC storage, 60 MB/s is certainly possible. Using that same old Oracle on Exadata can be considerably faster: 300 MB/s and sometimes higher, again depending on the configuration. A full rack is faster than a half rack ...
The 1st way is in general much faster than the 2nd one. In my case, copying a 300 MB code folder showed 45 minutes remaining (I only waited 5 minutes and didn't bother to finish it), but extracting the same content with WinRAR takes only about 45 seconds.
The data has to be read and written. The amount of written data is the same in both cases, but reading a compressed file means less data has to be read. Furthermore, it is usually much faster to read a single big file than to read a whole directory; this effect is bigger when there are many small files. You can reduce it by reading the directory structure into the cache first, so that the disk does not have to jump between the inodes and the data blocks:
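For example, on Linux (a sketch; `/tmp/cachedemo` is just a stand-in for the real source directory):

```shell
# Create a small demo tree (stand-in for the real source directory).
mkdir -p /tmp/cachedemo/sub
printf 'hello' > /tmp/cachedemo/sub/file.txt

# Walk the tree once, discarding the output: this stats every file,
# pulling the directory entries and inodes into the cache up front.
find /tmp/cachedemo -type f > /dev/null

# Now the copy can read data blocks without seeking back for metadata.
cp -r /tmp/cachedemo /tmp/cachedemo-copy
```

On a directory with thousands of small files, the warm-up pass costs seconds and can noticeably reduce head seeks during the subsequent copy on spinning disks.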