At AWS re:Invent 2016, AWS CEO Andy Jassy announced multiple updates to the EC2 instance roadmap. AWS is updating its high-I/O, compute-optimized, and memory-optimized instances, expanding the range of burstable instances, and moving into new areas of hardware acceleration, including FPGA-based computing. This blog post summarizes the re:Invent announcements about EC2, with links to a few other posts that contain additional information.
AWS has launched the next generation of memory-optimized EC2 instances. These R4 instances improve upon the popular R3 instances with a larger L3 cache and faster memory. On the networking side, they support up to 20 Gbps of ENA-powered network bandwidth when used within a Placement Group, along with 12 Gbps of dedicated throughput to EBS. To learn more, read: AWS Launches Its Next-Generation (R4) Memory-Optimized EC2 Instances.
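To see what that looks like in practice, here is a minimal sketch of the launch parameters you might pass to boto3's `run_instances` call to place an R4 instance in a cluster Placement Group with EBS optimization enabled. The AMI ID, placement-group name, and the helper function itself are placeholders for illustration, not real resources:

```python
# Sketch: parameters for launching an R4 instance inside a Placement Group
# so it can use the full 20 Gbps of ENA bandwidth. The AMI ID and group
# name below are placeholders, not real resources.

def r4_launch_params(ami_id, placement_group):
    """Build the keyword arguments for boto3's EC2 run_instances call."""
    return {
        "ImageId": ami_id,                            # an ENA-enabled HVM AMI
        "InstanceType": "r4.16xlarge",                # largest R4 size
        "MinCount": 1,
        "MaxCount": 1,
        "Placement": {"GroupName": placement_group},  # cluster placement group
        "EbsOptimized": True,                         # dedicated EBS throughput
    }

params = r4_launch_params("ami-xxxxxxxx", "my-cluster-group")
# With AWS credentials configured, the actual launch would be:
#   import boto3
#   boto3.client("ec2").run_instances(**params)
```

Keeping the parameters in a plain dict like this makes them easy to inspect or reuse across scripts before handing them to the API.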
Expanded T2 Instances – A few days back, on 30th November 2016, AWS added two new larger T2 instance sizes: t2.xlarge with 16 GiB of memory and t2.2xlarge with 32 GiB of memory. These new sizes let customers take advantage of the price/performance of the T2 burst model for applications with larger resource requirements. (This is the third time AWS has expanded the range of T2 instances; it added t2.large instances last June and t2.nano instances last December.) To learn more, read: Expanded T2 Instance Types – t2.xlarge and t2.2xlarge Instances.
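The T2 burst model can be sketched with a bit of arithmetic: one CPU credit lets one vCPU run at 100% for one minute, credits accrue at a fixed hourly rate, and bursting spends them. The earn rate below (54 credits/hour for t2.xlarge) is taken from AWS documentation as I recall it at launch time, so treat it as an assumption:

```python
# Sketch of the T2 CPU-credit burst model. One CPU credit = one vCPU running
# at 100% for one minute. The earn rate below (54 credits/hour for t2.xlarge)
# is an assumed figure from AWS docs at launch time.

CREDITS_PER_HOUR = 54          # t2.xlarge earn rate (assumed)
VCPUS = 4                      # t2.xlarge vCPU count

def credits_spent(avg_utilization, hours):
    """Credits consumed running all vCPUs at avg_utilization (0.0-1.0)."""
    return avg_utilization * VCPUS * 60 * hours

def balance_after(avg_utilization, hours, starting_balance=0):
    """Credit balance after `hours` of sustained load."""
    return (starting_balance
            + CREDITS_PER_HOUR * hours
            - credits_spent(avg_utilization, hours))

# At ~22.5% average utilization across all 4 vCPUs, earning matches spending
# (0.225 * 4 * 60 = 54 credits/hour), so the balance stays roughly flat.
steady = balance_after(0.225, 10)
```

The takeaway: workloads that idle below the baseline bank credits for later bursts, while sustained load above it eventually exhausts the balance.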
New F1 Instances – F1 instances give you access to game-changing programmable hardware known as a Field-Programmable Gate Array (FPGA). You can write code that runs on the FPGA and speeds up many types of genomics, seismic-analysis, financial-risk-analysis, big-data-search, and encryption algorithms by up to 30 times. AWS has launched a developer preview of the F1 instances and a Hardware Development Kit, and is also giving you the ability to build FPGA-powered applications and services and sell them in AWS Marketplace. To learn more, read: Developer Preview – EC2 Instances (F1) with Programmable Hardware.
The following improvements are still in progress:
New Elastic GPUs – You will soon be able to add high-performance graphics acceleration to existing EC2 instance types, with your choice of 1 GiB to 8 GiB of GPU memory and compute power to match. The Amazon-optimized OpenGL library will automatically detect and make use of Elastic GPUs. AWS is launching this new EC2 feature in preview form, along with the AWS Graphics Certification Program.
New I3 Instances – I3 instances will be equipped with fast, low-latency, Non-Volatile Memory Express (NVMe) based Solid State Drives. They'll deliver up to 3.3 million random IOPS at a 4 KB block size and up to 16 GB/second of disk throughput. These instances are designed to meet the needs of the most demanding I/O-intensive relational and NoSQL database, transactional, and analytics workloads. I3 instances will be available in six sizes, with up to 64 vCPUs, 488 GiB of memory, and 15.2 TB of storage (perfect for those ERP applications). All stored data will be encrypted at rest, and the instances will support the new Elastic Network Adapter (ENA).
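Those two headline figures can be sanity-checked against each other with a little arithmetic: random IOPS at a fixed block size imply a small-block throughput, which sits below the quoted sequential maximum (larger blocks are needed to approach it). A quick sketch, using decimal gigabytes as drive specs typically do:

```python
# Quick sanity check on the I3 numbers: random IOPS at a given block size
# imply a small-block throughput, which sits below the quoted 16 GB/s
# sequential maximum (larger blocks are needed to reach it).

def throughput_gb_per_s(iops, block_size_bytes):
    """Throughput implied by an IOPS figure at a fixed block size."""
    return iops * block_size_bytes / 1e9   # decimal GB, as in drive specs

small_block = throughput_gb_per_s(3.3e6, 4 * 1024)   # 3.3M IOPS at 4 KB
print(f"{small_block:.1f} GB/s")   # → 13.5 GB/s
```

So 3.3 million 4 KB operations per second works out to roughly 13.5 GB/s, consistent with (and just under) the 16 GB/s sequential figure.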
New C5 Instances – C5 instances will be based on Intel's brand-new Xeon "Skylake" processor, running faster than the processors in any other EC2 instance. As the successor to Broadwell, Skylake supports AVX-512 for machine learning, multimedia, scientific, and financial workloads that require top-notch floating-point performance. Instances will be available in six sizes, with up to 72 vCPUs and 144 GiB of memory. On the network side, they'll support ENA and will be EBS-optimized by default.
If you're interested, refer to the official AWS blog post for other relevant information.
I'll be sharing more information about each of these instances as soon as it becomes available, so keep following and enjoy the innovation!