T2.micro Bandwidth Limit

Jeanett Fite

Aug 3, 2024, 5:09:19 PM
to mobapepa

Also, the instance is small and it has very low CPU and RAM usage, but it generates a lot of content, so it can be considered like a web server serving a small number of static files (not of a big size) to many clients.

What I'm wondering is whether there are bandwidth limitations imposed by Amazon itself. Many VPS providers limit bandwidth to, let's say, 10 MB/s. Are there such limits at Amazon, and if there are, what are they?

From what I've been able to find on the AWS forums, it doesn't seem like the support people from Amazon want to answer that question. Their advice is to test it with an external source: AWS forum post from 2012.

Older posts (post1, post2) refer to transfer speeds in relation to instance size. The second one mentions that the data was once part of the AWS documentation but was later replaced with information about I/O.

As with bandwidth, AWS doesn't publish any concrete numbers, only "Low", "Moderate", "High", etc. I ran into some problems with PPS limitations, which are even less well documented than bandwidth, so I ran some tests.

There was a lot more that I found, too, around guaranteed throughput and best-effort PPS (packets per second). I put it into a blog post on monitoring packets per second on EC2, where I can show graphs and tables better than I can in a comment.

To tie it back to Amazon's network performance designations ("Low", "Moderate", "High"), you'd probably be shocked to learn how little correlation there is between actual bandwidth, actual PPS, and those designations. They are worthless; rely only on test results, not on the published categories from AWS.
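
If you want to measure it yourself, the simplest check is iperf3 between two instances; a minimal sketch, where the server address is a placeholder:

# On the instance acting as the server:
iperf3 -s
# On the client, run a 60-second test and report every 10 seconds:
iperf3 -c 10.0.0.5 -t 60 -i 10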

Instance bandwidth specifications apply to both inbound and outbound traffic for the instance. For example, if an instance specifies up to 10 Gbps of bandwidth, that means it has up to 10 Gbps of bandwidth for inbound traffic, and up to 10 Gbps for outbound traffic. The network bandwidth that's available to an EC2 instance depends on several factors, as follows.

Baseline bandwidth for single-flow traffic is limited to 5 Gbps when instances are not in the same cluster placement group. To reduce latency and increase single-flow bandwidth, you can place the instances in a cluster placement group or spread the traffic across multiple flows.

A single flow is a unique 5-tuple TCP or UDP flow. For other protocols following the IP header, such as GRE or IPsec, the 3-tuple of source IP, destination IP, and next protocol is used to define a flow.
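
Because each TCP connection is its own 5-tuple flow, you can see the difference between the single-flow cap and the instance-level limit by comparing one iperf3 stream with several in parallel (the address is again a placeholder):

# One connection = one flow, bounded by the single-flow limit:
iperf3 -c 10.0.0.5 -t 60
# Eight parallel connections = eight flows; the aggregate can exceed the
# single-flow cap, up to the instance-level bandwidth allowance:
iperf3 -c 10.0.0.5 -t 60 -P 8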

The available network bandwidth of an instance depends on the number of vCPUs that it has. For example, an m5.8xlarge instance has 32 vCPUs and 10 Gbps network bandwidth, and an m5.16xlarge instance has 64 vCPUs and 20 Gbps network bandwidth. However, instances might not achieve this bandwidth; for example, if they exceed network allowances at the instance level, such as packets per second or number of tracked connections. How much of the available bandwidth the traffic can utilize depends on the number of vCPUs and the destination. For example, an m5.16xlarge instance has 64 vCPUs, so traffic to another instance in the Region can utilize the full bandwidth available (20 Gbps). However, traffic that goes through an internet gateway or a local gateway can utilize only 50% of the bandwidth available (10 Gbps).

Typically, instances with 16 vCPUs or fewer (size 4xlarge and smaller) are documented as having "up to" a specified bandwidth; for example, "up to 10 Gbps". These instances have a baseline bandwidth. To meet additional demand, they can use a network I/O credit mechanism to burst beyond their baseline bandwidth. Instances can use burst bandwidth for a limited time, typically from 5 to 60 minutes, depending on the instance size.
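
If you want a number rather than an "up to" label, recent versions of the EC2 API expose baseline and peak bandwidth per network card through describe-instance-types. A sketch, assuming a reasonably new AWS CLI:

# Baseline vs. peak (burst) network bandwidth in Gbps for a burstable type:
aws ec2 describe-instance-types \
  --instance-types t3.micro \
  --query "InstanceTypes[].[InstanceType, NetworkInfo.NetworkCards[0].BaselineBandwidthInGbps, NetworkInfo.NetworkCards[0].PeakBandwidthInGbps]" \
  --output table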

An instance receives the maximum number of network I/O credits at launch. If the instance exhausts its network I/O credits, it returns to its baseline bandwidth. A running instance earns network I/O credits whenever it uses less network bandwidth than its baseline bandwidth. A stopped instance does not earn network I/O credits. Instance burst is on a best effort basis, even when the instance has credits available, as burst bandwidth is a shared resource.

The Amazon EC2 Instance Types Guide describes the network performance for each instance type, plus the baseline network bandwidth available for instances that can use burst bandwidth.

You can use CloudWatch metrics to monitor instance network bandwidth and the packets sent and received. You can use the network performance metrics provided by the Elastic Network Adapter (ENA) driver to monitor when traffic exceeds the network allowances that Amazon EC2 defines at the instance level.
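
As an example, here is a minimal CloudWatch query for outbound bytes (the instance ID and time window are placeholders). NetworkOut is reported in bytes per period, so divide the sum by the period length to get bytes per second:

aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name NetworkOut \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --start-time 2024-08-03T00:00:00Z \
  --end-time 2024-08-03T01:00:00Z \
  --period 300 \
  --statistics Sum \
  --output table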

You can configure whether Amazon EC2 sends metric data for the instance to CloudWatch using one-minute periods or five-minute periods. It is possible that the network performance metrics would show that an allowance was exceeded and packets were dropped while the CloudWatch instance metrics do not. This can happen when the instance has a short spike in demand for network resources (known as a microburst), but the CloudWatch metrics are not granular enough to reflect these microsecond spikes.
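
On instances that use the ENA driver, these allowance counters can also be read directly on the instance with ethtool (the interface name may be eth0 or ens5, depending on the instance). Non-zero values mean packets were queued or dropped because an instance-level allowance was exceeded:

ethtool -S eth0 | grep allowance
# Counters exposed by the ENA driver include:
#   bw_in_allowance_exceeded, bw_out_allowance_exceeded,
#   pps_allowance_exceeded, conntrack_allowance_exceeded,
#   linklocal_allowance_exceeded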

This is actually quite difficult to answer. T2 instances in AWS are "burstable" instances and have the ability to scale with CPU Credits. Bandwidth is correlated with the credit system, since available CPU affects overall throughput. So you have a baseline performance that can increase depending on your CPU credits, hence "low to moderate" as far as network performance goes.
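
You can watch the credit side of this in CloudWatch; a sketch with a placeholder instance ID and time window (CPUCreditBalance is published for burstable instances at 5-minute granularity):

aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name CPUCreditBalance \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --start-time 2024-08-03T00:00:00Z \
  --end-time 2024-08-03T06:00:00Z \
  --period 300 \
  --statistics Average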

Even if it is burstable, it should still have a range. I don't have the budget to pay for unlimited bursting, do you understand? I was hoping AWS could give me that range. "Low to moderate" is very confusing to many people like me. I am very angry.

Given that the instance is rated "low to moderate", and that the other answers boil down to "performance is variable because it is a burstable instance", you can safely assume that "low to moderate" means less than "up to 10 Gigabit", but it is variable, so there's no specific number. The better question here is: what bandwidth do you need?

Have a look at -network-performance-cheat-sheet/ (it's a few years old now but may still be useful). There it says a t2.micro tested at 0.06 Gbit/s sustained with 0.72 Gbit/s bursts. The "sustained" performance roughly doubled with each instance size increment.
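
For scale, 0.06 Gbit/s works out to 0.06 × 1000 / 8 ≈ 7.5 MB/s, which is below the 10 MB/s figure mentioned in the question, and the 0.72 Gbit/s burst is roughly 90 MB/s.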

I think I understand that you're after definite numbers provided by AWS, not numbers that third parties have benchmarked, which have a lot of confounding factors. But TBH I think you've got all the AWS-provided info you're going to get. We've worked with AWS for many years in a large enterprise and haven't come across anything more reliable to work with than these benchmarks.

aws ec2 describe-instance-types --filters "Name=instance-type,Values=c5.*" --query "InstanceTypes[].[InstanceType, NetworkInfo.NetworkPerformance]" --output table

You will get something like this:
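
(Illustrative output from a recent CLI; the exact rows and designations depend on the current c5 lineup.)

----------------------------------------
|        DescribeInstanceTypes         |
+-----------------+--------------------+
|  c5.large       |  Up to 10 Gigabit  |
|  c5.xlarge      |  Up to 10 Gigabit  |
|  c5.2xlarge     |  Up to 10 Gigabit  |
|  c5.4xlarge     |  Up to 10 Gigabit  |
|  c5.9xlarge     |  10 Gigabit        |
|  c5.12xlarge    |  12 Gigabit        |
|  c5.18xlarge    |  25 Gigabit        |
|  c5.24xlarge    |  25 Gigabit        |
+-----------------+--------------------+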

Everyone, these days I have been testing what "low to moderate" actually is. I just used iperf3 to test the bandwidth between my two t2.micro instances, and those two VMs are still within my free hours, but my AWS account incurred costs. Oh my god, it's very unfair! "Low to moderate" is not a range of numbers, and measuring it cost me money. I am very angry.
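
One note on the cost side: traffic between two instances in the same Availability Zone is only free of data transfer charges when it goes over their private IP addresses; if iperf3 is pointed at a public or Elastic IP, the traffic is billed as data transfer. A sketch with a placeholder instance ID and private address:

# Look up the private IP of the server instance:
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query "Reservations[].Instances[].PrivateIpAddress" --output text
# Then run the test against the private address instead of the public one:
iperf3 -c 172.31.0.10 -t 60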

T3 instances are the low cost burstable general purpose instance type that provide a baseline level of CPU performance with the ability to burst CPU usage at any time for as long as required. T3 instances are designed for applications with moderate CPU usage that experience temporary spikes in use.

T3 instances offer a balance of compute, memory, and network resources and are a very cost effective way to run a broad spectrum of general purpose workloads including large scale micro-services, small and medium databases, virtual desktops, and business-critical applications. T3 instances are also an affordable option to run your code repositories and development and test environments.

T3 instances are designed to run the majority of general purpose workloads at a much lower cost. T3 instances work by providing a baseline level of CPU performance to address many common workloads while providing the ability to burst above the baseline when more performance is required. T3 instances make use of credits to track how much CPU is used: they accumulate CPU credits when a workload is operating below the baseline threshold and use credits when running above it. T3 instances are unlike any other burstable instance available in the market today, since customers can sustain high CPU performance whenever and for however long it is required.

T3 instances start in Unlimited mode by default, giving users the ability to sustain high CPU performance over any desired time frame while keeping cost as low as possible. For most general-purpose workloads, T3 Unlimited instances provide ample performance without any additional charges. If the average CPU utilization of a T3 instance is lower than the baseline over a 24-hour period, the hourly instance price automatically covers all interim spikes in usage. In cases where a T3 instance needs to run at higher CPU utilization for a prolonged period, it can do so for a small additional charge of $0.05 per vCPU-hour. You can also choose to run in Standard mode, where a T3 instance can burst until it uses up all of its earned credits. For more details on T3 credits, please see the EC2 documentation page.
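
If you want to rule out that surplus charge entirely, you can switch an instance to Standard mode with a single API call; a sketch with a placeholder instance ID:

# Switch the credit specification from unlimited to standard:
aws ec2 modify-instance-credit-specification \
  --instance-credit-specification "InstanceId=i-0123456789abcdef0,CpuCredits=standard"
# Confirm the current setting:
aws ec2 describe-instance-credit-specifications --instance-ids i-0123456789abcdef0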

T3 instances feature either the 1st or 2nd generation Intel Xeon Platinum 8000 series processor (Skylake-SP or Cascade Lake) with a sustained all core Turbo CPU clock speed of up to 3.1 GHz, and deliver up to 30% improvement in price performance compared to T2 instances. T3 instances provide support for the new Intel Advanced Vector Extensions 512 (AVX-512) instruction set, offering up to 2x the FLOPS per core compared to the previous generation T2 instances. T3a instances feature the AMD EPYC 7000 series processor with an all core turbo clock speed of up to 2.5 GHz. T3a offers a 10% lower price than T3 instances for customers who are looking to further cost optimize their Amazon EC2 compute environments.
