
What is the AWS EC2 Calculator for?

The AWS EC2 calculator is right at the top of the AWS Simple Monthly Calculator, even above S3. This is probably a good indicator of the popularity of the service. The more you use a service, the more it will contribute to your cloud costs and so the more it pays to understand it. With that in mind, here is a quick guide to the AWS EC2 calculator.

The key factors in the AWS EC2 calculator

The key factors in the AWS EC2 calculator are the region, EC2 instances, EC2 dedicated hosts, EBS Volumes and Elastic Graphics. Other factors include Additional T2/T3 Unlimited vCPU Hours per month, Elastic IP and Data Transfer. Let’s look at the key factors in practice.

The region

These days, many companies choose their region with legal and compliance issues at the top of their list of priorities, and after that their next concern is likely to be latency (or the lack thereof). In the unlikely event that all other things are equal, however, remember that your choice of region can have a huge influence on the cost of any service. This is particularly obvious with high-volume services such as AWS EC2.

EC2 dedicated hosts

An EC2 dedicated host is essentially a physical server dedicated to your sole use. EC2 dedicated hosts can be a great solution for companies that would like to tap into the cost savings of the public cloud but want or need to keep their data in a completely private, single-tenant environment.

One of the nice features of AWS EC2 dedicated hosts is that you can largely automate the administrative side via AWS (specifically, AWS License Manager). You specify your licensing rules and attach them to the AMI, then specify your dedicated host management preferences.
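For illustration, here is a minimal sketch of what that setup can look like in code, using boto3 and AWS License Manager. The rule name, counts and ARN below are placeholders, and you would adapt the counting type and limits to your actual licensing terms.

```python
import boto3

# Hypothetical sketch: codify a per-core licensing rule with AWS License
# Manager, then associate it with an AMI so that instances launched onto
# your dedicated hosts are tracked against it automatically.
license_manager = boto3.client("license-manager")

config = license_manager.create_license_configuration(
    Name="sql-server-per-core",     # placeholder rule name
    LicenseCountingType="Core",     # count consumed licenses per core
    LicenseCount=64,                # placeholder entitlement
    LicenseCountHardLimit=True,     # block launches that would exceed it
)

# Attach the rule to the AMI (the ARN below is a placeholder).
license_manager.update_license_specifications_for_resource(
    ResourceArn="arn:aws:ec2:us-east-1::image/ami-0123456789abcdef0",
    AddLicenseSpecifications=[
        {"LicenseConfigurationArn": config["LicenseConfigurationArn"]}
    ],
)
```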

From that point on, in principle, you can treat it as “set and forget”, although, as always, just because you can doesn’t mean you should. It can be very dangerous (and/or expensive) to forget about anything in IT, so make a point of double-checking everything now and again to confirm it still accurately reflects your situation.

Amazon EBS Volumes

If you’re going to store data that needs to be updated regularly, then Amazon EBS volumes are a good investment. They are particularly recommended for data behind apps used by “frontline staff” such as your customer service team. Not to put too fine a point on it: back-office staff having to wait a bit is an inconvenience, but you may lose business if you make paying customers wait any longer than they believe is reasonable.

Amazon Elastic Graphics

Amazon Elastic Graphics essentially lets you attach extra graphics acceleration to your instances. Whether or not it’s relevant to you obviously depends on the extent, if any, to which your applications use graphics.

The issue of data transfers

The reason data transfers are listed as one of the “extra” factors is that they don’t relate directly to your use of EC2; they relate to your management of your cloud infrastructure as a whole. In particular, they reflect your ability to design apps that send data along the most economical paths.

For practical purposes, this means you want data to stay in its own availability zone (sub-region) as much as possible and, if that is not possible, to do your level best to keep it within its own region. AWS charges the highest prices for data transfers between AWS regions and between AWS and the internet, so you want to avoid these as much as possible.
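To make the economics concrete, here is a tiny sketch of the back-of-the-envelope model these rules imply. The per-GB rates are illustrative placeholders, not actual AWS prices, so substitute the current figures for your regions.

```python
# A tiny cost model for the routing rules above. The per-GB rates are
# illustrative placeholders, not current AWS prices; substitute the
# figures from the pricing pages for your regions.
RATES_PER_GB = {
    "same_az_private_ip": 0.00,  # free when using private IPs
    "cross_az":           0.02,  # assumed combined in/out rate
    "cross_region":       0.02,  # varies by region pair
    "to_internet":        0.09,  # tiered in practice
}

def monthly_transfer_cost(gb_by_path: dict) -> float:
    """Estimate the monthly bill for GB moved along each path."""
    return sum(RATES_PER_GB[path] * gb for path, gb in gb_by_path.items())

print(monthly_transfer_cost({
    "same_az_private_ip": 500,
    "cross_az": 200,
    "to_internet": 50,
}))  # -> 8.5
```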

For the sake of completeness, there is a little nuance here with some services, such as Amazon CloudFront, which directs users to the edge location with the lowest latency at that moment; these exceptions can be ignored for now, as they do not influence AWS EC2 costs.

To commit or not to commit?

The aforementioned factors are all essentially “technical”, but there is one final issue that can greatly influence the cost of the AWS EC2 service: the pricing model or models you use. The popularity of EC2 means that AWS offers “Savings Plans” for it. As their name suggests, Savings Plans offer AWS users discounts on EC2 (and other services) if they commit to a certain level of usage.

Savings Plans can achieve discounts of up to 72% (under current rates), so if you anticipate heavy usage you should at least investigate them, especially since they offer some degree of flexibility. They can also be used in combination with Reserved Instances and Spot Instances, giving you even more opportunities to reduce costs.
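As a quick worked example, the arithmetic looks like this. Both rates are invented for illustration; the actual discount depends on instance family, region, term length and payment option.

```python
# Back-of-the-envelope comparison of on-demand vs a Savings Plan commitment.
on_demand_rate = 0.10         # $/hour, hypothetical
savings_plan_discount = 0.40  # 40% discount, hypothetical
hours_per_month = 730

on_demand_cost = on_demand_rate * hours_per_month
committed_cost = on_demand_cost * (1 - savings_plan_discount)

print(f"On-demand:    ${on_demand_cost:.2f}/month")  # $73.00
print(f"Savings Plan: ${committed_cost:.2f}/month")  # $43.80
```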


How to Use the AWS Data Transfer Calculator

Making the most of your AWS data transfer calculator

A data transfer calculator can be a bit like a set of weighing scales. You may not want to look at it, but you probably should, and you should pay attention to what it’s telling you. Just as with weighing scales, the initial reading only tells part of the story. If it looks bad (in other words, excessive), the reasons probably lie with your behavior, and usually they can be fixed.

What your data transfer calculator is telling you

At a basic level, your data transfer calculator is telling you how much data you transferred between:

AWS and the internet

Different AWS regions

Different AWS availability zones (essentially sub-regions)

Different AWS services

At a deeper level, it’s telling you how good you are at routing your data traffic effectively.
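If you want those numbers programmatically rather than from the calculator, a sketch along these lines will surface the data transfer line items. It assumes the Cost Explorer API is enabled on your account; the dates are placeholders.

```python
import boto3

# Pull last month's spend grouped by usage type and surface the data
# transfer line items. The Cost Explorer API is served from us-east-1.
ce = boto3.client("ce", region_name="us-east-1")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2019-11-01", "End": "2019-12-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "USAGE_TYPE"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    usage_type = group["Keys"][0]
    cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
    # Usage types containing "DataTransfer" cover internet, inter-region
    # and inter-AZ traffic (exact names vary by region and service).
    if "DataTransfer" in usage_type and cost > 0:
        print(f"{usage_type}: ${cost:.2f}")
```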

What your data transfer calculator isn’t telling you

There is a bit of a twist to data transfer calculations, which is that some services (like AWS Kinesis) incorporate data transfer into the cost of the service. While this can be convenient, it also obscures how much data you are actually transferring. Effectively, the only way to keep a lid on these costs is to be really strict about keeping control of your data flows.

The fundamental rules of cost optimization for AWS data transfers

  • Try to keep all data traffic within the same availability zone if at all possible.
  • If that’s not possible, try to keep it within the same region.
  • Minimize the amount of outbound data you send to other regions or the internet.

Point three may look like a simple extension of points one and two (and to a certain extent it is), but the basic idea is that you look at traffic routing first, and when you do reach a situation where you must transfer data to another region or the internet, you then do everything possible to keep a lid on how much you transfer. Admittedly, you should be doing this at all times, but it takes on a whole new level of importance for transfers between regions or out to the internet.

(If you can) choose your region with great care

These days, the choice of AWS region is increasingly likely to be determined by the law rather than by issues of latency and cost. Assuming you do have a choice, however, investigate all your options, including the non-obvious ones, to see which offers the best overall deal.

If you are constrained by legal requirements, then you really need to be on point about managing your data flows and keeping on top of cost optimization, so as to minimize the impact of any extra costs.

Remember Amazon CloudFront

If your issue is data transfers to the internet, rather than to other AWS regions, then Amazon CloudFront could be well worth a look. It is essentially a content delivery network. Transferring data from EC2 to Amazon CloudFront is free; transferring data from Amazon CloudFront out to the internet does carry a charge, which varies by region.

On this point, please note that you cannot select a region for Amazon CloudFront the way you can for most other services. Your content is made available from edge locations around the world, and when a user requests it, they are directed to the one offering the lowest latency at that point in time. This gives the user the best speed and convenience, but the downside is that your costs can be unpredictable.

Having said that, if you know your user base and their habits, you can probably make an educated guess as to which edge locations they are likely to end up using, and hence at what your costs are likely to be.

Think about using a private IP address

Amazon charges more for data transfers made using a public IP address, and this includes Elastic IP addresses. If you make a point of using private IPs as much as you possibly can, you can make a real difference to your data transfer costs.
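A quick audit script can tell you where public addresses are in play. This boto3 sketch just lists the public and private IPs of running instances, which is a reasonable starting point for spotting traffic that could move to private addressing.

```python
import boto3

# Audit sketch: list which running instances carry public or Elastic IPs,
# a starting point for finding traffic that could move to private addresses.
ec2 = boto3.client("ec2")

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        public_ip = instance.get("PublicIpAddress", "none")
        private_ip = instance.get("PrivateIpAddress", "none")
        print(f'{instance["InstanceId"]}: public={public_ip}, private={private_ip}')
```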

Experiment with Amazon pricing tools

Amazon is currently in the process of replacing its Simple Monthly Calculator with a new tool called the AWS Pricing Calculator. At this point in time, it’s unclear when it will fully launch (it’s currently in beta testing) or what, exactly, it will offer. Based on its website, however, it looks set to offer a whole lot more flexibility and transparency than the current Simple Monthly Calculator. Having said that, the Simple Monthly Calculator is still a whole lot better than nothing and is certainly worth using in the meantime.

Stay on top of pricing changes

The information given in this article is correct as of December 2019. Obviously, AWS pricing can and does change, so make a point of keeping track of these changes so you can take early action if you need to.


AWS Lambda Calculator

The AWS Lambda calculator is probably going to become very relevant to you if your company moves to the AWS cloud service. Here is a quick guide to what it is and how it works.

What is AWS Lambda, Anyway?

In the context of AWS, Lambda is a service that allows you to run your code on demand without the need to provision servers. At the current time, Lambda can scale (automatically) from a few requests per day to thousands per second. Given the popularity of the service, it’s entirely possible that it will be developed further so it can scale even higher in the future.

One of the nice features of AWS Lambda is that you just choose the amount of memory you want for any given function and are automatically allocated a proportionate level of CPU power. If you scale the memory up or down, then the CPU power increases or decreases in tandem with it.

AWS Lambda can already run code in many of the major programming languages, including C#, Go, Java, JavaScript (via Node.js), Python and Ruby. Again, its range may be extended in the future, but even now its offering is probably more than enough for many businesses.

Understanding the AWS Lambda calculator

The bad news is that the AWS Lambda calculator is basic, to put it mildly. In fact, you could reasonably question whether it deserves to be classed as a pricing calculator at all, given that it amounts to a drop-down menu of regions plus a list of what each charges per number of requests and per time block. You then have to do the actual sums yourself, presumably on a proper calculator.

There are two pieces of good news for the mathematically challenged. Firstly, the calculations are so simple that they really can be done with just a basic calculator, even a cellphone calculator. Secondly, if you really want an online calculator, there are plenty of third-party options available.

Whichever option you choose, you’ll probably find it helpful to understand the basics of AWS Lambda pricing.

Understanding AWS Lambda pricing

The whole point of AWS Lambda pricing is that you literally only pay for what you use. Admittedly, this is often highlighted as a benefit of cloud computing in general, but AWS Lambda takes it to a whole new level. With “traditional” cloud computing, you typically fire up a virtual server, add whatever resources you need, do whatever you need to do and then shut it all back down again.

At least, that’s the theory. As anyone involved in real-world cloud computing will know, in reality, what often happens is that someone spins up the servers and then forgets to shut them down again, or shuts down some of the resources and forgets about the others (like the storage). That is literally impossible with AWS Lambda. It’s either on (in use) or off (out of use). There is no “in-between” or idling and its pricing reflects this.

AWS Lambda pricing depends on region, requests and duration

AWS pricing, in general, is usually dependent on the region, so there’s nothing new there. Requests and duration are what you need to understand, along with the slight twist of “provisioned concurrency”.

For the sake of completeness, you should be aware that standard AWS charges also apply to your usage of AWS Lambda. For example, if you need to transfer data from another region, the transfer will be charged at the standard EC2 data transfer rates. Likewise, if your functions access S3, you will be charged the standard fees for the read/write requests (and for the data stored in S3).

AWS Lambda requests

In the context of AWS Lambda, a request is simply an instruction to start executing code. This is typically made in the form of an event notification or an invoke call.

AWS Lambda duration

Duration is exactly what its name suggests. You are billed from the moment your code starts executing until the moment it terminates (or returns), rounded up to the nearest 100ms. The key point to note about duration in AWS Lambda is that the price per time block depends on the amount of memory you allocate to the function. It therefore pays to write efficient code.
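Putting requests and duration together, the sums the calculator leaves you to do look roughly like this. The rates below match the published figures at the time of writing (late 2019) but should be re-checked for your region, and the sketch ignores the monthly free tier.

```python
import math

# A minimal version of the sums the AWS Lambda calculator leaves to you.
PRICE_PER_REQUEST = 0.20 / 1_000_000   # $0.20 per million requests
PRICE_PER_GB_SECOND = 0.0000166667     # duration price per GB-second

def lambda_monthly_cost(requests: int, avg_duration_ms: float,
                        memory_mb: int) -> float:
    # Duration is billed per invocation, rounded up to the nearest 100 ms.
    billed_ms = math.ceil(avg_duration_ms / 100) * 100
    gb_seconds = requests * (billed_ms / 1000) * (memory_mb / 1024)
    return requests * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

# 3M requests/month at 120 ms average (billed as 200 ms) with 512 MB:
print(f"${lambda_monthly_cost(3_000_000, 120, 512):.2f}")  # ~$5.60
```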

AWS Lambda provisioned concurrency

You can think of provisioned concurrency as a turbo-charge for your Lambda functions: it keeps them initialized and ready to respond immediately rather than starting cold. Provisioned concurrency fees are similar to duration fees in that they are based on the amount of memory you allocate to the function and the amount of concurrency you configure on it.

The big difference is that provisioned concurrency fees are rounded up to the nearest five minutes. If your function exceeds the configured concurrency, the excess is billed at the standard rate.
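The provisioned concurrency charge follows the same pattern. The per GB-second rate below is an assumption for illustration, so check the current price for your region.

```python
import math

# Sketch of the provisioned concurrency charge (rate is assumed).
PC_PRICE_PER_GB_SECOND = 0.0000041667  # assumed rate, check your region

def provisioned_concurrency_cost(concurrency: int, memory_mb: int,
                                 minutes_enabled: float) -> float:
    # Billed for the time it is enabled, rounded up to the nearest 5 minutes.
    billed_minutes = math.ceil(minutes_enabled / 5) * 5
    gb_provisioned = concurrency * (memory_mb / 1024)
    return gb_provisioned * billed_minutes * 60 * PC_PRICE_PER_GB_SECOND

# 10 concurrent executions of a 1,024 MB function, enabled for 8 hours:
print(f"${provisioned_concurrency_cost(10, 1024, 8 * 60):.2f}")  # ~$1.20
```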


How to Create a Realistic Cloud Migration Cost Estimate

Creating a realistic cloud migration cost estimate is essential for keeping your IT budget on track. Get it right and you’ll get your cloud migration journey off to a great start. Get it wrong and your finance team could wind up losing sleep over how to balance the books, and your colleagues might not be thrilled about seeing “non-essentials” pulled out of their budgets to compensate for your mistake.

Fortunately, it is possible to create a realistic cloud migration cost estimate. Here are some tips to help.

Remember that a cloud migration cost estimate is only as good as the underlying data

This general principle holds true of any estimate, but it’s so important that it’s worth emphasizing: the difference between an “estimate” and a guess is the amount of research that goes into the figures. In theory, overestimating costs can appear less risky than underestimating them, but this is actually debatable, because it could lead you to avoid an action that would have saved you money.

It’s a good idea to start by comparing running costs and then move on to cloud migration costs

On the principle of “trust but verify”, it’s generally a good idea to start by checking that a cloud migration would actually reduce costs for you (or deliver some other benefit that would justify an increased cost). While this is likely to be true the vast majority of the time, there may be the occasional exception, and the occasional time when it might be more sensible to delay a cloud implementation. For example, if you’re planning an office move anyway, you might just want to live with what you have for now and then move everything at once.

When calculating the cost of running your current IT infrastructure, do your best to include indirect costs (such as lost revenue caused by downtime) as well as direct costs (basically anything related to hardware, software and network connectivity, including the costs of the humans involved in keeping them running).  This relates back to the first point.  You need to do everything possible to gather full and accurate data.

On that note, you should be aware that the likes of the AWS price calculator and the Azure TCO calculator are sales tools rather than finance tools, so it’s a good idea to double-check any assumptions they make and see how they apply to you.

Estimating cloud migration costs

Assuming that you’re happy you could save money (or gain some other benefit) by using cloud services, you can then move on to calculating the costs of the actual migration itself. These are not factored into the pricing calculators offered by the main cloud providers, so the onus is on you to figure them out for yourself. While many of the points you need to cover will be fairly obvious, especially if you’re doing a lift-and-shift migration, here are three points you might overlook.

You will probably need to keep data synchronized during the transition period

Even in a lift-and-shift migration, you do not just press a button and have everything transfer over. Unless you can do your entire cloud migration out of hours, you will probably need to keep the data in your legacy systems live until you are ready to “flick the switch” and turn them off. And when you do turn them off, you have to ensure that the data in your new cloud systems is an exact match for the data in your old systems. This means you probably want to budget for skilled labor to make that happen.
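As one small example of the verification work involved, here is a hypothetical spot-check comparing a local file’s hash against its migrated copy in S3. Bucket and paths are placeholders, and multipart uploads would need a different comparison strategy.

```python
import hashlib
import boto3

# Hypothetical cut-over spot-check: for single-part uploads the S3 ETag
# is the MD5 of the object body, so it can be compared to the local hash.
s3 = boto3.client("s3")

def md5_of_file(path: str) -> str:
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_migrated_copy(local_path: str, bucket: str, key: str) -> bool:
    etag = s3.head_object(Bucket=bucket, Key=key)["ETag"].strip('"')
    return etag == md5_of_file(local_path)

print(matches_migrated_copy("/data/orders.csv", "migrated-data", "orders.csv"))
```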

Your apps will need to be thoroughly tested before you switch off your legacy systems

Thanks to virtual machines and containers, you can run just about any app on cloud infrastructure. You may, however, have to make adjustments to get apps running properly. What this means in practice is that your intended lift-and-shift migration might turn into a refactoring migration (where you make some adjustments to apps before you migrate them) and/or might require some urgent development work to make legacy apps run in the cloud at all. Basically, always budget plenty of time and money for thorough testing.

Hiring consultants can make life a lot easier for everyone

It’s highly unlikely that the people who provide your in-house IT services are specialists in cloud migrations. Even if they’ve done one (or more) before, perhaps in previous roles, they’re just not going to have the sort of skill and experience that comes with regular practice. Hiring consultants can, therefore, be money well spent. Not only can they make your life easier, they can stop you from making costly mistakes and guide you towards getting the best value from your cloud migration.


Hybrid Cloud Pros and Cons

Here is a quick guide to hybrid cloud pros and cons for anyone who might be considering implementing a hybrid cloud in 2020.

A brief guide to hybrid clouds

Hybrid clouds mix together elements of private clouds and public clouds. For completeness: a private cloud is simply a cloud used by a single tenant. It may be hosted on-premises or offsite by a third-party provider, but it is only used by one specific client. A public cloud, by contrast, is shared by unrelated clients, which means that unless you are a cloud provider yourself, it will always be hosted externally. Well-known public clouds include Amazon Web Services, Microsoft Azure and Google Cloud.

Hybrid cloud pros and cons tend to be two sides of the same coin

The pros of hybrid clouds all tend to revolve around the fact that they basically offer companies the opportunity to “have their cake and eat it” by combining the security of a private cloud with the flexibility (and opportunities for significant cost savings) of public clouds.  The cons of hybrid clouds all tend to revolve around the fact that they can be much more complex to implement than straightforward private clouds or public clouds.  Let’s look at what this means in practice.

Flexibility versus complexity

These days, some companies may be effectively forced to use private clouds to keep regulators happy, and even those that are not may choose to “play safe” and go down this route anyway. Private clouds can offer many advantages over traditional IT infrastructure but, at the current time, they are highly unlikely to be as cost-effective as public clouds, so companies might prefer to split their data and apps across the two, using the private cloud for anything sensitive and public clouds for anything non-sensitive.

This sounds easy on paper, but in reality implementing a hybrid cloud can be a whole lot more complex than “just” implementing a private cloud or a public cloud. To begin with, you can’t simply do one single lift-and-shift migration, perhaps to a containerized cloud infrastructure, and then start reworking your apps in the cloud. As a minimum, you’re going to need two distinct migrations (one for the private cloud and one for the public cloud), plus a plan for how the two systems will interface with each other.

Get this right and you can have the best of both worlds.  Get it wrong, however, and you could find yourself wasting so much time and money on hassle you could have avoided that you might well end up wishing that you’d just paid up for a full private cloud.

Cost control versus lack of cost transparency

Even though private clouds can work out more economical than traditional IT infrastructure (and offer extra benefits), they still need to be provisioned in a similar way.  If you’re planning on running your own on-premises private cloud, then you’re essentially going to be facing the same sort of resourcing issues as you did before you moved to the cloud.  If you use an external provider then you may have more scope for flexibility, but there are probably going to be some limits, for example a commitment to a minimum processing volume over a certain period of time.

Public clouds, by contrast, are all about flexibility and can offer great opportunities for cost optimization, so that you only pay for what you need.

The problem may come when you start mixing the two, especially if you need to move data between the public cloud and the private cloud and start incurring charges for the traffic.  You then have to try to figure out what is legitimate usage (and what, if anything, you can do to reduce costs) and what is just people being people and doing what is convenient rather than what is most cost-effective. 

If you go for a hybrid multicloud architecture, then life could become even more complicated as different cloud services can apply different prices and pricing structures to what is essentially the same service, plus you have the potential for even more traffic moving between the different cloud platforms.

Disaster recovery versus security concerns

Public clouds and externally-hosted private clouds have resilience built into them. It is, quite literally, your cloud vendor’s job to do whatever it takes to keep services running, and they will build their business around that fact. For example, they will set up their facilities in buildings and areas that offer maximum security and stability, rather than maximum convenience for staff and customers.

At the same time, handing over any data to an external party basically means that your security is only as good as theirs.  Now, for some SMBs this may not be an issue, especially since SMBs at the smaller end of the scale may not have the means to implement effective security on their own.  For others, however, it could be a major concern which would need to be addressed very seriously.


Cloud Cost Optimization Strategy for 2020

Make it a New Year’s resolution to improve your cloud cost optimization strategy for the year 2020 (and beyond).  Here are some tips and suggestions to help.

Remember that cloud cost optimization strategy is an ongoing process

It’s human nature to notice changes that are big and/or sudden, but it’s easy to miss the little changes that happen one day at a time, until they add up to something big enough to notice and you suddenly realize you need to act. With that in mind, make a New Year’s resolution to go over your cloud cost optimization strategy at least once a year, to ensure it keeps pace with changes in your business and potential changes to the cloud services themselves.

Start by educating yourself and your colleagues

First of all, you need to ensure that you understand the practicalities of cloud cost optimization. This starts with understanding the pricing of the cloud environments you use, and that can be something of a challenge. For example, even though Microsoft Azure and Amazon Web Services run along broadly similar lines, each has its own set of pricing structures, which can require quite a bit of reading to understand in depth and which are, of course, subject to change at any time (although there are options for locking in prices, such as reserved instances).

You need to understand cloud pricing yourself before you can educate your colleagues on what it means for cloud cost optimization strategy, or, to put it more pragmatically, for the company’s bottom line and therefore ultimately for their salaries, bonuses and/or share options.

This last point can be very important since it can be challenging to find a “stick” with which to enforce compliance with the sort of measures which can create real cost savings.  If, however, you can show people that they would personally benefit from reduced costs, for example, if some of the money saved were to be passed along to them in some way, it could be much easier to get them on board.

For the sake of completeness, passing on the benefit of reduced costs doesn’t necessarily mean increasing salaries or paying bonuses (although it can). It can be something as simple as telling people that if the company saves X amount of money, you will spend Y on something they would like, which can be anything from pizza to a pool table.

Learn to love cost-monitoring tools

Maybe “learn to love” is a bit strong, but learn to get to grips with them at any rate. As an absolute minimum, get confident with the tools provided by your cloud vendors. Once you’re comfortable with them, try out some third-party cost-optimization tools such as Cloudability and Cleanshelf. Even though many of these tools are paid products, they can often more than make back their price in cost savings.
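If you want a concrete starting point with the built-in tooling, one common first step is a CloudWatch alarm on the account’s estimated charges. This sketch assumes billing alerts are enabled in your account preferences; the threshold and SNS topic ARN are placeholders.

```python
import boto3

# Billing metrics live in us-east-1 and must be enabled in the billing
# preferences before this alarm will receive any data.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="monthly-bill-over-1000-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                 # evaluate every 6 hours
    EvaluationPeriods=1,
    Threshold=1000.0,             # placeholder threshold
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
)
```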

Really think about what cloud region(s) you use

Admittedly, in some cases, you will not actually have a choice of cloud regions due to legal restrictions, but if you do then it’s definitely worth taking the time to think about it.  Although being in the nearest region to your customers may reduce latency, depending on where you are located it may not be the most cost-effective option.

Consider a move to tiered storage

Obviously, any data you need to access regularly and/or quickly is going to need to be on a cloud storage solution which reflects that, but in most companies there’s at least some data which is being archived “just in case”, often for tax/regulatory compliance reasons. 

In many cases, this data is probably never going to be needed, and even if it is, it’s not going to be needed immediately. If you need convincing on this, look at the data you hold and check how long you would have to comply with a request for it. If it’s longer than a couple of business days, you should be absolutely fine with the few hours it will typically take you to “thaw” data out of the likes of Glacier.
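For reference, “thawing” an object is a single API call. This boto3 sketch (bucket and key are placeholders) requests a standard-tier restore and keeps the restored copy available for a week.

```python
import boto3

# A standard Glacier retrieval typically completes within a few hours.
s3 = boto3.client("s3")

s3.restore_object(
    Bucket="archive-bucket",
    Key="2016/tax-records.zip",
    RestoreRequest={
        "Days": 7,  # how long the restored copy remains available
        "GlacierJobParameters": {"Tier": "Standard"},
    },
)
```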

Do your sums on pay-per-use services

There are basically two ways to pay for cloud services. One is to book resources and use them as much (or as little) as you want; the other is to pay per use. The former tends to be geared towards heavier usage and the latter towards lighter usage. This means you need to be very careful about opting for pay-per-use services (such as serverless functions): they can work out very much more expensive than booked resources under heavy use, but much more economical for resources you genuinely only need on an occasional basis.
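The sums themselves are simple. Here is a break-even sketch with invented figures that you can rerun with your own rates.

```python
# Break-even point between a booked resource at a flat monthly price and
# a pay-per-use alternative. Both figures are hypothetical.
booked_monthly_cost = 73.00  # flat fee for the month, hypothetical
pay_per_use_rate = 0.25      # $ per hour actually used, hypothetical

break_even_hours = booked_monthly_cost / pay_per_use_rate
print(f"Pay-per-use is cheaper below {break_even_hours:.0f} hours/month")  # 292
```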