
Is Azure Lift and Shift Right for You?

Azure lift and shift migrations get you into the cloud quickly, but they generally only provide a minimal range of benefits compared to using cloud-native applications. It is, therefore, advisable to consider all your options before choosing your migration method.

The basics of Azure lift and shift

Technically, Azure lift and shift is known as Azure rehosting, and both descriptions give a fairly accurate summary of the migration method. Basically, you make an exact copy of your existing on-premises environment and replicate it on a cloud-based platform.

To keep the migration clean and easy to verify, the lift and shift itself should be exactly that. In other words, resist the temptation to resolve any issues with your migrating applications during the migration itself, no matter how simple the fix might seem. Either resolve issues before you migrate the applications to the cloud, or wait until the migration is complete and tests clearly show that it exactly replicates your existing systems. Only then deal with the matter.

As you may have guessed from this description, Azure lift and shift migrations are often the best option when you’re fairly happy with what you have already and just want to migrate to the cloud for the business benefits it offers, especially the potential for quick, easy cost savings. Sometimes a lift and shift migration may be all you need to do, but a lot of the time, it’s just the start of a journey of improvement toward cloud-native architecture.

Azure lift and shift versus refactoring

Refactoring is sometimes known as repackaging, possibly because it often involves the use of containers, which package an application together with its dependencies and share the host operating system’s kernel, allowing many of them to run side by side on a single virtual machine. You also have the option to use infrastructure as a service (IaaS) and platform as a service (PaaS) products.

Refactoring basically means changing the design of an application while making few, if any, changes to the underlying code. It’s one (small) step beyond lift and shift and is used for many of the same reasons. If you’re fairly happy with what you have but do want to give it a bit of a wash and brush-up before you migrate to the cloud, then refactoring is probably the way to go. As with lift and shift migrations, most of the work will probably come after you have migrated to Azure.

Azure lift and shift versus rearchitecting

The rearchitecting migration method basically means reworking an application’s code so that it really can take advantage of what the cloud has to offer. It’s usually only feasible when you can devote an extensive amount of time and/or resources to getting it right.

In fact, it’s probably best if you have your own in-house developers who are familiar with the application(s) and/or extensive documentation (preferably both). Otherwise, you could find that it would have been much quicker, easier and more affordable just to have done a plain-vanilla lift and shift and then worked on the development of the app(s) once they were already in the cloud, probably in an agile manner. That way, staff would have had the benefit of incremental improvements while the company as a whole benefitted from incremental cost savings, which would help to offset the development costs.

Opting for the rearchitecting migration method is a big decision. It is probably only advisable where a company has already made a significant investment in its existing applications, believes that spending a bit more would be justified by the long-term return, and considers those applications so important that they need to be cloud-ready from the moment of deployment.

Azure lift and shift versus rebuilding

Rebuilding means exactly that. You create (or recreate) an application from scratch using cloud-native technologies so that what you end up with is something that is both resilient and highly scalable and which can be deployed without the hassle of needing to manage software licenses, underlying application infrastructure, middleware or any other resources. Basically, the application is all yours, and Azure platform as a service (PaaS) and/or infrastructure as a service (IaaS) takes care of everything needed to make your application run.

Perhaps rather ironically, there are two ways rebuilding can be used. The first is similar to the rearchitecting migration method. Basically, if you’re happy with what you have and you think it’s important that it can take full advantage of everything the cloud can do, right from the moment of deployment, then rebuilding can be the way to go.

Alternatively, rebuilding can be what happens after a lift and shift migration as part of a continual process of improvement. The downside to this approach is that you will have to shoulder the costs of the lift and shift migration and then the costs of the rebuilding, but the upside of it is that these costs will be, at least in part, offset by the cost savings you can make through being in the cloud.

See Also

Limitations of the Azure TCO Calculator


The Five Quick Wins of an AWS Lift and Shift Migration

AWS lift and shift migrations can get your applications into the cloud relatively easily. The word “relatively” is important, because any migration to the cloud requires some degree of advance planning. It’s just that lift and shift tends to be the most straightforward way to get legacy systems into cloud infrastructure with the least hassle. In particular, AWS lift and shift migrations offer five quick wins.

AWS lift and shift migrations get “quirky” applications into the cloud

In the early days of IT, documentation standards were very different from what they are today, and back then, even in IT, “documentation” usually really did mean physical, paper documents, which were often expensive to print and were vulnerable to loss, fire, flood and simple degradation through age. This means that many older SMBs probably have existing applications that need to be kept active for the time being but which cannot simply be reworked for use in the cloud.

The standard workaround is to move such an application to the cloud in a container (which packages it with the operating-system environment it needs), so staff can keep working, and then build a new cloud-native replacement when time and resources allow.

AWS lift and shift migrations give speedy access to cost savings

While lift and shift migrations are unlikely to give you access to all the cost efficiencies cloud-native applications can offer, they can still reduce costs significantly. In particular, they allow companies to cut back on their in-house infrastructure, with all the IT-management overhead it brings. They also offer more scope for scaling up and down, not necessarily to the same extent as with cloud-native applications, but certainly more than with traditional on-premises data centers.

AWS lift and shift migrations effectively eliminate the worry of hardware failures

In principle, AWS can experience hardware failures; in fact, it probably experiences them all the time. In practice, though, its customers are very unlikely to notice them, at least not to any meaningful extent. The only likely exception is a problem that effectively takes one of its data centers offline, and since there are always at least two data centers in any Amazon region, all that would happen is that transactions would go through another data center (or centers), causing a potential increase in latency.

While this might be something of an irritation, it is highly unlikely to cause the same sort of hassle as hardware failures in on-premises data centers, especially where legacy hardware that is difficult to replace is involved. The alternative to accepting the risk of hardware failure, especially legacy hardware failure, is to go to the expense of buying spare parts that you might or might not need (and which might or might not still work after a lengthy storage period).

AWS lift and shift migrations provide an effective disaster-recovery strategy

While the single, biggest driver of the shift to cloud services is probably the opportunities for reduced costs, disaster recovery probably isn’t far behind. A huge benefit of cloud computing over traditional disaster recovery strategies is that it shifts the emphasis from locations to services.

In other words, if staff are prevented from accessing their regular working location, they don’t have to find some way to get to an alternative physical site; they can simply work from wherever they can get online. Not only does this save the costs of maintaining an external disaster-recovery site, but it also eliminates the problem of your best-laid plans going wrong because staff cannot reach that site, for example, if major roads are closed.

AWS lift and shift migrations provide a route to remote working

Even though this is last on the list of quick wins, it can still be a major benefit. Cloud computing is known for its scalability, and the reason this is so valued is that companies of all sizes often have peaks and dips in their business cycles.

In addition to wanting to adjust their cloud services to match their business needs, they may also need to adjust their staffing levels. This may not be too difficult if a company needs “traditional” seasonal workers, especially at times when students are on vacation, but it can be quite a challenge if companies just want to have someone work for short and/or intermittent periods.

Basically, if you want someone to work on-site, then they have to be happy that the cost and time of the journey will be justified. When you move to the cloud, however, you have the option to support remote working, which can make it much easier to bring people (typically freelancers) on board for short “bursts”.

See Also

AWS Cost Optimization

AWS Cloud Pricing

AWS Cost Calculators


AWS S3 Calculator: Extended Description of Amazon’s Pricing Calculator

Before we dig into Amazon’s S3 cost calculator, try this advanced S3 cost calculator developed by our PhD cloud economists.

The S3 cost calculator is one of Amazon’s most important management tools. Amazon S3, or Amazon Simple Storage Service, is a storage platform offering huge scalability, data protection, high availability, strong performance, and many other features.

It is an object storage service provided through a web service interface. AWS S3 is the same scalable storage platform that Amazon, the e-commerce giant, uses to run its own website and network.

Whatever the size of your business or industry, AWS S3 has enough space to store and protect your data. That storage, of course, has to be paid for.


What is Amazon S3 & why is there a need for an AWS S3 calculator?

Amazon S3, also known as the Amazon Simple Storage Service, is basically a storage platform for data accessed over the internet.

When companies or large businesses need a lot of storage capacity at a low price, Amazon S3 is often the first choice. The data is stored in several geographical locations for better data security and backup.

Businesses find it extremely helpful for improving their online operations.

Amazon S3 is known for features such as top-notch security, durability, and enhanced scalability.

Companies can easily collect, store, and analyze their data from a single platform at low prices.

What Makes the AWS S3 Calculator Helpful?

When using the AWS platform, you need an S3 monthly cost calculator to get an accurate pricing estimate.

It not only serves the pricing purpose but also helps you identify the resources your business actually needs, which can save you money. When using the calculator, you will come across numerous tabs and rows defining the available instance types.

Amazon S3 stores a great quantity of data from all corners of the world. For instant data recovery and backup, the data is stored in numerous locations.

And with the S3 calculator to hand, you can easily estimate the prices of all storage classes, plans, and more.

The AWS S3 calculator is not complicated to use; most of the tabs and columns are self-explanatory. The field list might seem lengthy, but all of the fields are easy to understand and follow.

You do, however, need to pay close attention when choosing plans and when answering the questions related to data transfer IN and OUT.

Overall, using this tool is not particularly challenging, and you can easily decide on the plans or resources your business needs.

AWS S3 Calculator: The Major Cost Components

First things first, AWS S3 offers some exceptional features for storage and data transfer. You pay only for the resources you use, with no minimum fee.

Storage Capacity and Data Transferred are the main factors behind the overall cost.

The AWS S3 pricing calculator gives monthly cost estimates to help you choose the right storage plan. You can choose the storage volume usage plan – low, medium, or high – according to your requirements.

S3 pricing mainly depends on four major components:

  • Storage Capacity (Required and Used)
  • Request and Data Retrieval
  • Data Transfer Pricing (Bandwidth Required)
  • Data Management Features

Of these four cost components, the amount of storage and the quantity of data transferred usually make the biggest difference.

The data transfer costs are calculated from the data transferred OUT of the Amazon S3 platform; data transferred IN is free.
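
To make those components concrete, here is a minimal sketch of the arithmetic the calculator performs. The rates below are illustrative placeholders rather than current AWS prices, so check the pricing page for your region before trusting the numbers.

# A minimal sketch of the arithmetic behind the calculator. All rates are
# hypothetical placeholders, NOT current AWS prices.
STORAGE_RATE_PER_GB = 0.023       # assumed S3 Standard rate, USD per GB-month
PUT_RATE_PER_1000 = 0.005         # assumed rate per 1,000 PUT/POST/LIST requests
GET_RATE_PER_1000 = 0.0004        # assumed rate per 1,000 GET requests
TRANSFER_OUT_RATE_PER_GB = 0.09   # assumed data-transfer-OUT rate, USD per GB

def estimate_monthly_s3_cost(storage_gb, put_requests, get_requests, transfer_out_gb):
    """Itemized monthly estimate; data transfer IN is free, so it has no parameter."""
    costs = {
        "storage": storage_gb * STORAGE_RATE_PER_GB,
        "requests": (put_requests / 1000) * PUT_RATE_PER_1000
                    + (get_requests / 1000) * GET_RATE_PER_1000,
        "transfer_out": transfer_out_gb * TRANSFER_OUT_RATE_PER_GB,
    }
    costs["total"] = sum(costs.values())
    return costs

print(estimate_monthly_s3_cost(storage_gb=500, put_requests=100_000,
                               get_requests=1_000_000, transfer_out_gb=50))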

AWS S3 Calculator: Pricing Details and Factors – Extended Version

From high scalability and strong security to solid performance, AWS S3 offers multiple benefits. All of these are available to businesses, organizations, and individuals at comparatively low cost.

As noted above, there are four significant factors on which the overall pricing depends. Amazon S3 stores data and applications for millions of customers around the globe.

Let’s look at each of these factors in detail:

Storage

Data, applications, and more are stored in Amazon S3 buckets. The total amount of data stored on the platform is a major contributor to the total cost.

There are several storage tiers, or classes, on the S3 platform. Some of them are:

  • Standard
  • Intelligent-Tiering
  • Standard-Infrequent Access (Standard-IA)
  • Reduced Redundancy Storage (RRS)
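
If you are not sure how much data you actually hold, the following sketch totals the stored bytes in a bucket by storage class, giving you real figures to type into the calculator. It assumes boto3 is installed and your AWS credentials are configured; “my-bucket” is a placeholder name.

from collections import defaultdict

import boto3

s3 = boto3.client("s3")
totals = defaultdict(int)

# Walk every object in the bucket and add up the sizes per storage class.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-bucket"):       # placeholder bucket name
    for obj in page.get("Contents", []):
        totals[obj.get("StorageClass", "STANDARD")] += obj["Size"]

for storage_class, size_bytes in totals.items():
    print(f"{storage_class}: {size_bytes / 1024 ** 3:.2f} GB")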

Request and Data Retrieval

There is a small fee for each request made on the platform.

Even requests made simply to access data or list buckets add to the overall bill.

The AWS S3 calculator lets you enter the expected number of requests and retrievals, so these charges show up in your estimate rather than as a surprise on your bill.

Data Transfer

Whenever data goes OUT of the S3 platform, you are charged a fee. However, data transferred within the same S3 bucket is not chargeable, and data transfer IN is free.

You need to pay an appropriate fee for every downloaded or accessed file on the S3 platform.

Data Management

Users also need to pay for storage management features. Capabilities such as inventory, analytics, and object tagging are all charged separately.

These charges are relatively small, which is one reason most companies are happy to use Amazon S3.

Wrapping Up

All in all, the AWS S3 calculator is an absolute must for getting a monthly estimate of your overall costs.

The pricing calculator also helps you choose the plan that best suits your business requirements. It is ideal for getting a rough estimate of monthly bills, which makes decisions much more straightforward.


The Basics of Effective Lift and Shift Cloud Migrations

Lift and shift cloud migrations are often the most pragmatic way to move to the cloud. While they are the simplest approach to cloud migrations, they do usually require significant advance planning in order to be successful.

The basics of lift and shift cloud migrations

The basic idea behind lift and shift cloud migrations is that you literally create an exact replica of your existing on-premises infrastructure. This means that you absolutely must create a full and accurate map of what you already have, down to the last dependency. In particular, you need to look at all the connections in and out of the application and its data.

In a true “lift and shift” migration, all code (and data) is lifted into the public cloud exactly “as is”. If you want to stretch the definition, you could also include applications which require only minor adjustments to work in the cloud, although technically that would be refactoring rather than a genuine lift and shift approach.

It is impossible to overstate the importance of backups

In all probability, if you have put in the necessary advance planning, your lift and shift cloud migration will go without a hitch. If it doesn’t, however, backups may save you a lot of blood, sweat and tears, not to mention overtime hours. In addition to backing up your databases, back up your support files as well.
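
As a simple illustration, the sketch below bundles the support files into a checksummed archive before the migration. The paths are hypothetical placeholders; substitute your own database dumps, configuration directories and backup target.

import hashlib
import tarfile
from pathlib import Path

# Hypothetical locations -- replace with your own backup sources and target.
SOURCES = [Path("/var/backups/db"), Path("/etc/myapp")]
ARCHIVE = Path("/var/backups/pre-migration.tar.gz")

with tarfile.open(ARCHIVE, "w:gz") as tar:
    for source in SOURCES:
        tar.add(str(source), arcname=source.name)

# Record a checksum so the backup can be verified after the migration.
digest = hashlib.sha256(ARCHIVE.read_bytes()).hexdigest()
print(f"{ARCHIVE} sha256={digest}")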

Containerization is often the way to go

Containers package an application together with its dependencies on top of a shared operating system kernel, and they have all kinds of benefits. Whole articles could be written about just why they’re so great, but in terms of the lift and shift approach to cloud migration, the key point is that they make it straightforward to test your software configuration in the cloud before you actually deploy it into production.

Thorough testing is as important as backups

Even if everything looks like it has gone perfectly, test it thoroughly. In a worst-case scenario, having to roll back your cloud migration is still probably going to be a whole lot less hassle than waiting for an issue to start impacting your production environment, especially if it is the sort of issue which could be noticed externally and/or which could get you into trouble with regulators. Remember that the fact that an existing application runs perfectly on virtual machines hosted on-premises does not guarantee that it will work when you switch to cloud computing, at least not at first.

Pro tip – resist the temptation to add any new features as part of the migration. This is the complete opposite of the lift and shift approach. The whole idea behind the strategy is that you take something which you know is already working and just move it, as is, to the public cloud. You then test to confirm that it is still working. If you add new features, you are setting yourself up for hours and hours of work just to figure out whether an issue relates to the migration itself or to bugs in your new code.

Triple-check you have met all legal requirements

These days most companies have to abide by stringent data-protection requirements and many have to abide by other legal requirements as well. Before you retire your old systems, triple-check that you are still in the clear on all of these. It would be rather ironic if all the cost savings made by using cloud computing were completely outweighed by an entirely avoidable regulatory breach.

The real work usually starts after lift and shift cloud migrations

Lift and shift cloud migrations move your workload to the cloud, but that’s basically it. In some cases, that will be all you can do, at least in the short term. If your company has been in existence for some time and has had bespoke applications created, then there’s a good chance that some of them will be poorly documented and/or close to impossible to reverse engineer. In such cases, the most pragmatic approach can be to work in the cloud so you get the full advantage of all of the cost savings it can offer (not to mention its general flexibility) and then create a new, cloud-native, application which replicates the functionality, but applies modern standards (and is appropriately documented).

In many cases, however, existing applications can be successfully adapted so that they effectively become “cloud-native” and hence can really maximize the cost savings which cloud computing has to offer. This isn’t necessarily easy, although there are often some “quick wins” to be had, such as adapting existing applications to auto-scale instead of using a single server, but it can make a massive difference not just to your cost efficiencies but also to your overall productivity.
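
As an illustration of that auto-scaling quick win, here is a minimal sketch using boto3. It assumes AWS credentials are configured and that a launch template and subnets already exist; every name in it is a placeholder, not a value prescribed by this article or by AWS.

import boto3

autoscaling = boto3.client("autoscaling")

# Replace the single server with a small Auto Scaling group (placeholder names).
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="my-app-asg",
    LaunchTemplate={"LaunchTemplateName": "my-app-template", "Version": "$Latest"},
    MinSize=1,
    MaxSize=4,
    DesiredCapacity=1,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
)

# Scale on average CPU so capacity follows demand instead of sitting idle.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-app-asg",
    PolicyName="target-60-percent-cpu",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 60.0,
    },
)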



The Pros and Cons of Hybrid Cloud Infrastructure

Understanding the pros and cons of hybrid cloud infrastructure starts by understanding what it actually means both in theory and in practice.

Hybrid cloud infrastructure contains elements of private and public clouds

The theory is simple enough. A hybrid cloud has some cloud infrastructure which is for the exclusive use of a single tenant (a private cloud) and some infrastructure which is shared between multiple tenants and managed by the provider (a public cloud).

Private and public clouds each have their own advantages and disadvantages

Private clouds have the edge on security and may be quicker. Public clouds can be more flexible and more economical. If you implement your hybrid cloud infrastructure well, you can have the benefits of both private and public clouds. There are, however, some potential downsides.

A hybrid cloud environment is invariably more complex than both private and public clouds

If you’re running private and public clouds then, by definition, you’re running two computing environments.  You may well find yourself running more than that.  For example, if you need to keep some non-cloud infrastructure, at least for the time being, then that’s a third computing environment.  If you want to use different public cloud services, perhaps for back-up, or for different purposes, then that’s another computing environment.

Private clouds can be challenging and/or expensive to implement

There are basically two ways you can implement a private cloud, one is to do it yourself (or contract a firm to do it for you) and the other is to use a third-party vendor.

If you choose to run your private cloud yourself, then you will shoulder all the up-front expense of the initial cloud implementation and have all the responsibility for making it both secure and reliable. You will have no third-party vendor to call for support if you have issues, so you are either going to have to recruit and retain your own in-house expertise or hire consultants. You are also going to need to manage capacity while keeping within budget.

If you choose to go with a third-party vendor, then you can save yourself all of the up-front expenses and ongoing responsibility; however, a private cloud will still cost more than a public cloud, and you’re still going to have to play your role in managing both environments.

The first question to ask, therefore, is whether or not you actually need a private cloud at all or whether you would be fine with a public cloud.  If you do come to the conclusion that you need a private cloud, perhaps for security reasons, then the second question to ask is whether or not it’s really worth the extra hassle of implementing a public cloud as well.  In other words, how much is that extra flexibility (and potentially economy) really going to benefit your organization?

The answer to this question will probably depend partly on your size at the moment, partly on the extent to which your capacity fluctuates throughout your business cycle and partly on your plans for growth, however, if it doesn’t look like you’ll get all that much use out of public cloud services any time in the near future, then you might find it easier all round just to stay with a private cloud for the time being and expand to a hybrid cloud later if it becomes necessary and/or desirable.

A hybrid cloud infrastructure can have a lot of security risks

First of all, you’re going to need to make absolutely sure you can host data in the cloud at all and if you can, you’re going to need to make absolutely sure you comply with any restrictions.  For example, can you use a public cloud or do you need to use a private cloud and in either case are there geographic restrictions on where your data can be held (and to where it can be transmitted)?  Get data in the wrong place and you can be looking at major issues with regulators even if nothing untoward has happened to it.

Then you’re going to need to make absolutely sure that you’re implementing security correctly in all your computing environments.

In short

Hybrid cloud environments can allow companies to have the best of both worlds by offering the speed and security of private clouds together with the scalability and economy of public clouds, but this only has value if you actually need all of these factors in the first place.  If you’re a small SMB on a tight budget then sticking with public cloud services is likely to be the most sensible option, unless you really need maximum security and/or speed in which case, just paying for a private cloud may be more sensible than trying to implement a hybrid cloud and getting it wrong.  If, however, you are a larger SMB and are really wanting to make the most of everything private and public clouds have to offer, then hybrid cloud infrastructure could be an excellent choice.


Azure Cost Optimization Strategies for Cloud Cost Saving

A cloud implementation can be a lot like a New Year’s resolution. Everything’s great for a month or two, then things start to slip, and by the time the next New Year rolls around, you’re wondering just what happened.

If that sounds like you, then the long-term solution is to develop a robust cloud strategy to bring your cloud costs under control. If, however, you need to bring down your cloud spending as quickly as humanly possible, here are some “quick fixes” that can also have benefits over the long term.

Delete unused disks

One key point you need to understand about Azure is that it does not automatically delete your disks when you delete a virtual machine. What’s more, it’s highly unlikely that it ever will, for the simple reason that Microsoft is highly unlikely ever to want to be held responsible for you losing data you wanted to keep. This means that the onus is on you to remember to do it yourself.

Deleting disks in Azure can be a bit of a pain, but it’s important to make a point of doing it. Realistically, you want to instill it into everyone that they must clean up their disks after each use and then, once a week or so, do a double-check and clean up anything anyone’s missed. If the same people are routinely failing to clean up after themselves, take it up with them and/or their manager.
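
If you want a quick way to spot the leftovers, the following sketch lists managed disks that are no longer attached to any virtual machine. It assumes the azure-identity and azure-mgmt-compute packages are installed; the subscription ID is a placeholder.

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"   # placeholder
compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

# Managed disks with no "managed_by" reference are not attached to any VM --
# typically the leftovers from deleted virtual machines.
for disk in compute.disks.list():
    if disk.managed_by is None:
        print(f"Unattached: {disk.name} ({disk.disk_size_gb} GB) in {disk.location}")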

Deal with idling resources

Sometimes you genuinely need to leave a virtual machine idling, and there is a potential solution for that (see the note on B-series machines below). Most of the time, however, idling resources are, bluntly, a sign of bad Azure cost management or, in other words, just a waste of money.

Often the easiest way to deal with idling resources is to give every asset an owner who is responsible for managing its Azure billing and, ideally, paying for it out of their own departmental budget. If nobody is willing to own a resource, then it gets turned off; if nobody screams, it wasn’t needed. Ideally, you will want to back this up with support to help owners with their Azure cost optimization, because there is a good chance that the issue will boil down to rightsizing, and that is a genuine challenge.
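
One low-tech way to enforce this is to tag every resource with an owner and regularly flag anything untagged. Here is a minimal sketch along those lines; the "owner" tag name is simply a convention you would choose yourself, and the subscription ID is a placeholder.

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"   # placeholder
compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

# Flag virtual machines that nobody has claimed via an "owner" tag.
for vm in compute.virtual_machines.list_all():
    tags = vm.tags or {}
    if "owner" not in tags:
        print(f"No owner tag: {vm.name} ({vm.location}) -- {vm.id}")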

Size your resources appropriately

Correct resource-sizing really is the foundation of all Azure cost optimization and it is not as easy as it sounds. In fact, realistically, instead of spending human blood, sweat, toil and tears on the matter, it’s probably a very good idea just to invest in a cloud cost optimization tool to analyze your usage and make recommendations on sizing. This is likely to end up working out substantially more affordable (and less hassle) than the number of staff hours it would take to achieve the same result – assuming you could work it out by yourself.
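
That said, if you want to sanity-check a particular machine yourself before investing in a tool, a rough signal is its average CPU over a representative week. The sketch below pulls that figure from Azure Monitor; it assumes azure-identity and azure-mgmt-monitor are installed, and the subscription ID, resource names and dates are all placeholders.

import datetime

from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"   # placeholder
monitor = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

# Full resource ID of the VM to check (placeholder values).
vm_id = ("/subscriptions/" + subscription_id +
         "/resourceGroups/my-resource-group/providers/Microsoft.Compute/virtualMachines/my-vm")

result = monitor.metrics.list(
    vm_id,
    timespan="2024-05-01T00:00:00Z/2024-05-08T00:00:00Z",   # placeholder week
    interval=datetime.timedelta(hours=1),
    metricnames="Percentage CPU",
    aggregation="Average",
)

samples = [point.average
           for metric in result.value
           for series in metric.timeseries
           for point in series.data
           if point.average is not None]
if samples:
    print(f"Average CPU over the period: {sum(samples) / len(samples):.1f}%")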

Check out B machines

Over the long term, platform-as-a-service may come to be seen as standard, but for now, a lot of the time, virtual machines are the most practical way to go. The problem with virtual machines is that they are billed the whole time they are powered on, regardless of whether or not they are actively in use. In principle, the answer to this is to power off the virtual machines. In practice, however, there may be instances when a virtual machine has to be kept available at all times, even though it’s hardly ever going to be used and that can have a painful impact on your Azure billing. One potential solution is to use “B machines” or burstable machines, which are designed for this specific purpose and can offer significant cost savings over standard virtual machines.
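
As a sketch of what switching an always-on but rarely used machine to a burstable size might look like (assuming azure-identity and azure-mgmt-compute are installed; the resource group, VM name and target size are placeholders, and some resizes require the VM to be deallocated first):

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import HardwareProfile, VirtualMachineUpdate

subscription_id = "00000000-0000-0000-0000-000000000000"   # placeholder
compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

# Resize a rarely used VM to a burstable B-series size (placeholder names).
poller = compute.virtual_machines.begin_update(
    "my-resource-group",
    "my-idle-vm",
    VirtualMachineUpdate(hardware_profile=HardwareProfile(vm_size="Standard_B2s")),
)
poller.result()   # wait for the resize to complete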

Split out your databases

In simple terms, if you have a SQL database that needs a lot of computing power, then you’re probably just as well to move from a regular SQL server to Azure SQL. Even if you have a SQL database that uses minimal computing power but which has predictable usage patterns, it may be just as well to move from regular SQL to Azure SQL.

If, however, you have a collection of databases with “spiky” usage patterns, then you may find that regular Azure SQL gets very expensive very quickly because you have to resource for peak periods and leave the resource lying idle (but being charged) the rest of the time. If this sounds like you, then Azure SQL elastic pools could be just what you need.

Basically, as their name suggests, this allows you to buy a “pool” of resources that you can share amongst various databases. Ideally, you want all the databases in the pool to have similar usage patterns so they can all use the resources equitably. If you have databases with higher resource requirements, then it’s usually best either to split them off into their own pool or to set them up under regular Azure SQL.
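
To see why pooling helps, consider a deliberately hypothetical comparison: four spiky databases that each need 50 compute units at peak, but whose peaks rarely coincide, so a shared pool of 80 units covers them. The per-unit prices below are made-up placeholders; only the shape of the comparison matters.

# A hypothetical, back-of-the-envelope comparison. The prices are placeholders.
PRICE_PER_DB_UNIT = 1.50      # assumed monthly price per compute unit, single database
PRICE_PER_POOL_UNIT = 2.25    # assumed monthly price per compute unit, elastic pool

databases_peak_units = [50, 50, 50, 50]   # each database must be sized for its own peak
pool_peak_units = 80                      # spikes rarely coincide, so the pool peaks lower

single_db_cost = sum(databases_peak_units) * PRICE_PER_DB_UNIT
pool_cost = pool_peak_units * PRICE_PER_POOL_UNIT

print(f"Provisioned individually: ${single_db_cost:.2f}/month")
print(f"Shared elastic pool:      ${pool_cost:.2f}/month")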

See Also

AWS Fargate Price Reduction