Get Cloud Readiness Assessment
In the context of digital transformation, the cloud is often a priority for organisations seeking competitive advantage and faster business operations. Cloud migration, the process most closely associated with the cloud, is the movement of an organisation's workloads to the cloud, and it encompasses people, process and technology.
Effective cloud migration begins with the right foundation. A pre-migration plan plays a crucial role in moving workloads to the cloud with minimal risk. A cloud readiness assessment can turn an organisation's vague idea of moving to the cloud into a comprehensive plan: it clarifies how to move data in a phased, planned manner without disrupting your business, and it determines the order in which events should occur. A cloud readiness assessment is the first step in formulating that plan.
The assessment is a process that takes the applications and data you want transferred to a cloud platform and determines the best way to do it. It takes into consideration the impact on business operations, the effort needed to move the applications, and the best practices for your business and environment. The result is a measure of your cloud readiness: how prepared your applications are for a move to a cloud platform. Before the assessment begins, however, there are a few tasks you as the owner should complete to make the process efficient and accurate.
Current Infrastructure Analysis
First, before conducting a cloud readiness assessment, you need to analyse your current infrastructure. This is where you’ll record what your current on-premise system is like. Here are a few things to consider.
How Much Are You Consuming?
Think about how many resources you're using, such as hardware, storage and network capacity. What demands do you place on them?
The Number of Users
How many people access your database regularly and work with the information? This is important to know because many cloud vendors build pricing models around the number of active users.
What Your Current Costs Are
Figure out how much you’re spending on your current system, including maintenance and security. Then, decide if that’s a price you want to decrease, increase, or keep the same. Understanding your IT infrastructure is crucial in figuring out what system you need next. So, in addition to the criteria above, you need a grasp of what integration capabilities you already possess and how easily the applications can be modified.
What Do You Need?
Recognising your current system is only half the battle. Next, it's time to think about what you need that your current infrastructure does not offer. Cloud platforms offer loads of optimised functions like real-time analysis and data visualisation. Take the time to explore the available features and determine which ones benefit you.
Gap Analysis
Now that you know where your current systems stand, you need to complete a gap analysis. Cloud-based solutions offer flexibility and cutting-edge technology, but you need to predict how your current applications will react to the migration, and this analysis helps you do that. Some applications will need re-hosting, re-platforming or re-architecting, so highlight those specific apps.
Estimate Costs
All business operations come at a price, so it’s best to be a step ahead. There are multiple ways to go about estimating future costs. But first, you have to know what features you require in the new platform, how many users there will be, and a realistic view of what you can afford.
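As a rough illustration, a first-pass estimate can be as simple as totalling both environments side by side. The sketch below uses Python with entirely hypothetical figures; real estimates should come from your vendor's pricing calculator and your own invoices.

    # First-pass monthly cost comparison; every figure here is a placeholder.
    on_prem_monthly = {
        "hardware_amortisation": 4000,
        "maintenance_and_security": 1500,
        "licences": 800,
    }
    cloud_monthly = {
        "compute": 3200,
        "storage": 600,
        "licences_per_user": 25 * 120,  # assumed price per user x active users
    }

    print(f"on-premise:  ${sum(on_prem_monthly.values()):,}/month")
    print(f"cloud (est): ${sum(cloud_monthly.values()):,}/month")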
Map Your Migration
While the readiness assessment will determine the best route for cloud migration, it doesn’t hurt to do a little road-mapping yourself. The map should contain all the information from your initial analysis, partnered with step-by-step instructions of what actions need to be taken. Include a detailed timeframe for the different phases as well.
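One lightweight way to capture such a map is as structured data that tooling and status reports can share. The sketch below is a minimal Python example; the phases, owners and dates are invented for illustration.

    # A minimal, machine-readable migration roadmap; all entries are illustrative.
    roadmap = [
        {"phase": 1, "action": "Re-host static web tier", "owner": "infra team",
         "start": "2024-07-01", "end": "2024-07-15"},
        {"phase": 2, "action": "Move database to a managed service", "owner": "DBA team",
         "start": "2024-07-16", "end": "2024-08-15"},
        {"phase": 3, "action": "Re-platform internal APIs", "owner": "app team",
         "start": "2024-08-16", "end": "2024-09-30"},
    ]

    for step in sorted(roadmap, key=lambda s: s["phase"]):
        print(f'Phase {step["phase"]}: {step["action"]} ({step["start"]} to {step["end"]})')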
On Your Way to Cloud Readiness
Understanding cloud readiness and prepping your systems for a much-needed facelift is a small amount of work compared to the advantages headed your way. We, at V2 Technologies, have the experience to facilitate your transition to the cloud in a planned, predictable and timely manner.
Best Cloud Migration Strategy
Cloud migration is a tall order: your migration strategy should be robust and should help achieve key business objectives, all while being executed in Agile sprints that allow you to incorporate ongoing feedback. Here at V2 Technologies, we help our clients kick-start their migration projects without losing focus on their end-point objectives and business continuity.
Re-hosting is the most straightforward cloud migration path. It means that you lift applications, virtual machines and server operating systems from the current hosting environment to public cloud infrastructure without any changes. It is a low-resistance migration methodology: you capture an application as an image and move it, via migration tools such as VM Import or CloudEndure, to run on the public cloud.
You should be aware that this quick win comes with a drawback: limited use of cloud efficiencies. Simply re-hosting an application workload in the public cloud doesn't utilise cloud-native features such as efficient CI/CD automation, monitoring systems, automated recovery and self-healing, containerised environments, or open-source compatible services. However, you will still reduce the effort spent on system administration and gain time to refocus technologists on solving business problems and product optimisation.
This migration type can also be the starting point for large-scale optimisation projects when an organisation needs to move off its on-premise infrastructure in a short period of time.
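For example, with the AWS SDK for Python (boto3), a VM image that has already been exported to S3 can be registered as an import task. The bucket and file names below are hypothetical; this is a minimal sketch, not a complete runbook.

    import boto3

    ec2 = boto3.client("ec2")
    task = ec2.import_image(
        Description="Lift-and-shift of the on-premise web server",
        DiskContainers=[{
            "Description": "Exported VM disk",
            "Format": "vmdk",                               # OVA, VHD and raw also work
            "UserBucket": {"S3Bucket": "my-export-bucket",  # hypothetical bucket
                           "S3Key": "exports/webserver.vmdk"},
        }],
    )
    print(task["ImportTaskId"])  # poll progress with describe_import_image_tasks()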
Re-platforming involves making certain optimisations to the operating system, changes to the applications' APIs, and middleware upgrades while you carry out an otherwise standard lift-and-shift. As a result, you can leverage more cloud benefits, reshape the source environment to make it compatible with the cloud, fine-tune the application functionality, and avoid post-migration work.
Before implementing any product enhancements, keep in mind that the underlying codebase will change. Even insignificant changes therefore require thorough retesting of your application's performance. Once you have implemented the planned adjustments and version upgrades, the application can be moved to the optimised platform and cloud servers.
The re-platforming strategy sits between a simple lift-and-shift and a more profound re-architecture of the application. The alterations to the codebase are therefore likely to be minor and should not change the core app functionality; for example, you may want to add new features or replace individual application components.
Repurchasing means changing the proprietary application in use for a new cloud-based platform or service. Often, that means you drop the existing licence agreement (or let it expire) and adopt a new platform or service in its place. For example, you may choose to switch from your legacy CRM system to a SaaS CRM that better meets your organisation's requirements.
Refactoring (or re-architecting) is driven by a strong desire to improve your product and represents the opposite of lift-and-shift migration. It assumes that a specific business target is set from the beginning, e.g. for the availability or reliability of the application. Sometimes that means you have to completely re-engineer your application logic and develop a cloud-native version from scratch.
Opting for this cloud migration model, you should take into account that it may require more resources due to the increased complexity of its implementation. On the other hand, it allows full use of cloud-native benefits, such as disaster recovery or containerisation of the application environment. In the long run, refactoring can prove the more cost-efficient path thanks to these added capabilities.
One of the most common examples of refactoring is shifting a mainframe monolithic application to microservices-based infrastructure in the cloud.
Retaining means keeping some components of your IT infrastructure on your legacy estate. An organisation may want to keep some stand-alone workloads and databases because of security issues or other constraints; for example, you may have to comply with regulatory requirements governing the locations in which certain information is stored. When you categorise workloads for this type of migration, you create a hybrid infrastructure in which some workloads are hosted in the cloud and some are retained on-premise.
Retiring applies where, in complex applications and environments, some infrastructure components can simply be turned off without any loss of productivity or value for end consumers. You decommission or archive the unneeded parts and replace their functionality with other services and components. As a result, you can substantially reduce the complexity of your computing, architecture, storage, licensing and backup, making your infrastructure leaner.
Top Cloud Development Frameworks & Design
We can help you leverage AWS tools to streamline your development, build, test, deployment and monitoring processes. The intent is to reduce the time it takes to code, test, re-code, re-test, deploy and monitor your applications.
A few of the tools, IDEs and products are listed below to give a better idea of the importance of choosing the right product for the right use case.
AWS Cloud9
AWS Cloud9 is AWS' cloud-based integrated development environment (IDE); it enables developers to write, run and debug code in any supported internet browser. The IDE supports more than 40 programming languages, including popular options such as JavaScript, Python and C++. Cloud9's features include syntax highlighting, outline views of specific files, and code auto-completion as you edit. While there is no direct cost for using Cloud9, standard charges for the compute and storage resources consumed still apply.
AWS CodeCommit
AWS CodeCommit is a managed source control service for code collaboration. This secure cloud developer tool hosts private Git repositories and handles scaling of the underlying infrastructure. CodeCommit works with any Git command or tool developers have already adopted.
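As a small boto3 sketch (the repository name is hypothetical), creating a repository and retrieving its clone URL looks like this:

    import boto3

    codecommit = boto3.client("codecommit")
    repo = codecommit.create_repository(
        repositoryName="billing-service",
        repositoryDescription="Private Git repository for the billing service",
    )
    # Hand this URL to git clone; IAM credentials control who can access it.
    print(repo["repositoryMetadata"]["cloneUrlHttp"])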
AWS CodeStar
AWS CodeStar provides a user interface and templates to help engineers develop, build and deploy applications. With CodeStar, IT teams can manage software development tasks, such as setting up a continuous delivery toolchain or permissions, from one place.
AWS CodeArtifact
AWS CodeArtifact, AWS' managed artifact repository, enables developers to store, publish and share software packages securely. The service integrates with Maven, Gradle, npm, yarn and other package managers and build tools.
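A minimal boto3 sketch, assuming a hypothetical domain and repository, fetches an authorisation token and the repository endpoint that a package manager such as pip would be pointed at:

    import boto3

    codeartifact = boto3.client("codeartifact")
    token = codeartifact.get_authorization_token(
        domain="my-company", durationSeconds=1800)["authorizationToken"]
    endpoint = codeartifact.get_repository_endpoint(
        domain="my-company", repository="python-packages", format="pypi")
    print(endpoint["repositoryEndpoint"])  # configure pip against this URL + token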
AWS X-Ray
AWS X-Ray analyses and debugs distributed applications in production. With the help of this cloud developer tool, users gain a better understanding of how their applications and underlying services are performing, so IT teams can identify and address performance hiccups throughout the production process. X-Ray can analyse a range of application types, from three-tier designs to complex microservices applications.
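With the X-Ray SDK for Python (aws-xray-sdk), instrumenting a function is a one-line decorator. The sketch below is minimal; in a real web application the segment is usually opened by framework middleware, and a local X-Ray daemon must be running to ship the traces.

    from aws_xray_sdk.core import xray_recorder

    @xray_recorder.capture("query_orders")      # records a timed subsegment
    def query_orders(customer_id):
        return []                               # placeholder for a real database call

    xray_recorder.begin_segment("checkout")     # normally done by middleware
    query_orders("c-123")
    xray_recorder.end_segment()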
AWS Cloud Development Kit (CDK)
The AWS Cloud Development Kit (CDK) is an open-source software development framework. It defines cloud infrastructure as code in familiar programming languages and deploys it through AWS CloudFormation. With the CDK, developers can interact with their cloud applications; for example, they can list the stacks defined in their apps, turn the stacks into CloudFormation templates and deploy them to any public AWS Region.
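A minimal CDK app in Python (stack and bucket names are hypothetical) that defines a single encrypted, versioned S3 bucket and synthesises it to a CloudFormation template:

    import aws_cdk as cdk
    from aws_cdk import aws_s3 as s3

    class StaticAssetsStack(cdk.Stack):
        def __init__(self, scope, construct_id, **kwargs):
            super().__init__(scope, construct_id, **kwargs)
            # One construct line becomes a full CloudFormation resource.
            s3.Bucket(self, "AssetsBucket",
                      versioned=True,
                      encryption=s3.BucketEncryption.S3_MANAGED)

    app = cdk.App()
    StaticAssetsStack(app, "StaticAssetsStack")
    app.synth()   # `cdk deploy` then pushes the generated template to CloudFormation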
AWS CodeBuild
AWS CodeBuild is a continuous integration service. It collects source code, runs tests and creates packages that are ready to be deployed. With CodeBuild, developers and administrators don't have to worry about provisioning, managing and scaling build servers.
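Kicking off a build from boto3 is a single call, assuming a hypothetical project that has already been configured with a buildspec:

    import boto3

    codebuild = boto3.client("codebuild")
    build = codebuild.start_build(projectName="billing-service-build")
    print(build["build"]["id"])   # poll with batch_get_builds() for status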
AWS CodePipeline
AWS CodePipeline, a continuous delivery service, enables users to model, visualise and automate the software deployment process. A developer can model an entire release process so that the application is built, tested and deployed through the workflow whenever the code changes. It is also possible to integrate custom or partner tools into CodePipeline at any stage of the release process.
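As a sketch against a hypothetical pipeline, boto3 can trigger a release manually and inspect the state of each stage:

    import boto3

    codepipeline = boto3.client("codepipeline")
    codepipeline.start_pipeline_execution(name="billing-service-pipeline")

    state = codepipeline.get_pipeline_state(name="billing-service-pipeline")
    for stage in state["stageStates"]:
        print(stage["stageName"], stage.get("latestExecution", {}).get("status"))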
AWS Device Farm
AWS Device Farm is a cloud developer tool focused on improving application quality, time to market and overall customer satisfaction through testing on real mobile devices. With Device Farm, developers upload an application or test scripts and run tests across a large number of real Android and iOS devices. Within the tool, developers can also debug and reproduce customer issues.
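Scheduling a run from boto3 looks like the sketch below. All ARNs are placeholders, and note that Device Farm's API is served from the us-west-2 region:

    import boto3

    devicefarm = boto3.client("devicefarm", region_name="us-west-2")
    run = devicefarm.schedule_run(
        projectArn="arn:aws:devicefarm:us-west-2:111122223333:project:EXAMPLE",
        appArn="arn:aws:devicefarm:us-west-2:111122223333:upload:EXAMPLE-APP",
        devicePoolArn="arn:aws:devicefarm:us-west-2:111122223333:devicepool:EXAMPLE",
        name="smoke-test",
        test={"type": "BUILTIN_FUZZ"},   # built-in fuzz test; no scripts required
    )
    print(run["run"]["arn"])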
Keeping your data secured and private
The AWS security model involves a shared responsibility between you and Amazon. You are responsible for user authentication, securing user access, operating systems, applications, networks and third-party integrations. Amazon, in turn, is responsible for securing the underlying cloud infrastructure.
AWS provides features and tools for securing the aspects you are responsible for. Nonetheless, it is up to you to keep tabs on the security configurations, implement the settings appropriately, and manage access and privileges granted to users and third party groups.
We, at V2 Tech, intend to facilitate a governance mechanism that gives an objective view of your compliance, or lack thereof, with your obligations to keep your applications safe.
Through a review process, we also recommend a set of AWS managed services and tools that can be leveraged to further secure your applications.
Virtual Private Cloud
Run your machines inside a private subnet within an AWS VPC. Very few (if any) of your servers should have public IP addresses; ideally, your EC2 instances have private IP addresses only. The only way for web traffic to reach an instance should be through a load balancer that forwards traffic to specific ports on specific machines. This greatly reduces your attack surface.
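A minimal boto3 sketch of that layout (CIDR ranges and availability zone are illustrative): create a VPC and a subnet that never gets a route to an internet gateway, so instances placed in it are reachable only internally.

    import boto3

    ec2 = boto3.client("ec2")
    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
    vpc_id = vpc["Vpc"]["VpcId"]

    # Private subnet: no internet gateway route is attached to it, and
    # instances launched here receive no public IP addresses.
    subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24",
                               AvailabilityZone="ap-southeast-2a")
    print(subnet["Subnet"]["SubnetId"])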
Security Groups
Security groups act as a firewall for your instances, limiting the ports on which instances accept traffic and the sources from which they accept it. Instances in a private subnet should only accept traffic from a load balancer or from other internal instances. If you do choose to run some instances in a public subnet with public IP addresses, they should accept traffic only on public service ports, for example port 80 (HTTP) and port 443 (HTTPS).
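For example, with boto3 (both group IDs are hypothetical) you can allow the app tier to accept HTTPS traffic only from the load balancer's security group, rather than from any IP range:

    import boto3

    ec2 = boto3.client("ec2")
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",      # the app tier's security group
        IpPermissions=[{
            "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
            # source is the load balancer's group, not 0.0.0.0/0
            "UserIdGroupPairs": [{"GroupId": "sg-0fedcba9876543210"}],
        }],
    )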
No SSH
AWS has numerous orchestration platforms, such as Elastic Beanstalk, Elastic Container Service and EC2 Simple Systems Manager. With these tools you can interact with your instances securely without ever needing to SSH into them. There is no excuse for leaving port 22 open to the world. Instead, your instances should be provisioned, set up and operated with hands-off automated tooling, and monitored using logs and metrics extracted from the machine via tools like AWS CloudWatch Logs and AWS CloudWatch Metrics. SSH should almost never be necessary to operate an instance.
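With Systems Manager, for instance, a command can be pushed to an instance over the SSM agent rather than over SSH. A boto3 sketch with a placeholder instance ID:

    import boto3

    ssm = boto3.client("ssm")
    resp = ssm.send_command(
        InstanceIds=["i-0123456789abcdef0"],    # placeholder instance ID
        DocumentName="AWS-RunShellScript",
        Parameters={"commands": ["systemctl restart nginx"]},
    )
    print(resp["Command"]["CommandId"])  # inspect output via get_command_invocation()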
AWS Relational Database Service Encryption
AWS RDS provides a fully managed relational database platform that both backs up and secures your data. You can encrypt data in transit by enabling SSL for connections between your instances and your RDS database, and encrypt data at rest using encryption keys.
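Encryption at rest is a single flag at instance-creation time. A minimal boto3 sketch (identifiers and sizing are hypothetical); for encryption in transit, your client additionally connects over SSL, e.g. sslmode=require for PostgreSQL.

    import boto3

    rds = boto3.client("rds")
    rds.create_db_instance(
        DBInstanceIdentifier="orders-db",
        Engine="postgres",
        DBInstanceClass="db.t3.medium",
        AllocatedStorage=100,
        MasterUsername="dbadmin",
        ManageMasterUserPassword=True,   # RDS keeps the password in Secrets Manager
        StorageEncrypted=True,           # encrypt storage with the default KMS key
    )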
AWS Elastic Block Store volume encryption
Amazon EBS provides networked block storage volumes for your instances. You should enable encryption for these volumes if you are self-hosting a database and storing private user data on them.
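A boto3 sketch: you can switch on encryption by default for the whole region, and explicitly flag individual volumes as well (size and zone are illustrative).

    import boto3

    ec2 = boto3.client("ec2")
    ec2.enable_ebs_encryption_by_default()   # all new volumes in this region encrypted

    ec2.create_volume(
        AvailabilityZone="ap-southeast-2a",
        Size=200,                            # GiB
        VolumeType="gp3",
        Encrypted=True,                      # uses the account's default KMS key
    )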
S3 encryption
Sensitive data should be encrypted with a key before it is put into AWS S3. You can either let AWS manage the encryption key server-side, or manage the key yourself and encrypt client-side.
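For the server-side variant, a boto3 sketch (bucket and key names are hypothetical) that asks S3 to encrypt the object with a KMS-managed key on write; for the client-side variant, you would encrypt the bytes with your own key before calling put_object.

    import boto3

    s3 = boto3.client("s3")
    s3.put_object(
        Bucket="my-private-bucket",          # hypothetical bucket
        Key="reports/2024-q1.csv",
        Body=b"customer_id,total\n...",
        ServerSideEncryption="aws:kms",      # S3 encrypts at rest with a KMS key
    )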
IAM Roles
It is critical to use IAM roles. Each user (or process) that interacts with your account should have its own credentials and the ability to assume one or more IAM roles. If a single credential pair is shared by every user of your system, you are at risk. Instead, each developer or admin should have their own credential pair, each CI/CD process that builds your code should have its own credentials, each instance in your cluster should have its own credentials, and each service that runs on those instances should have its own credentials.
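In practice, a user or process with its own base credentials exchanges them for short-lived role credentials via STS. A boto3 sketch with a hypothetical role ARN:

    import boto3

    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn="arn:aws:iam::111122223333:role/deploy-role",  # hypothetical role
        RoleSessionName="ci-deploy",
        DurationSeconds=900,             # credentials expire after 15 minutes
    )["Credentials"]

    # A session scoped to the role's permissions only.
    session = boto3.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )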
No static credentials
Static credentials are always a security risk. If you have static AWS API credentials committed to your code repository, you are doing it wrong and are very vulnerable. Use IAM roles for instances, or IAM roles for tasks if you run on Elastic Container Service, to give your running processes unique, short-lived and automatically rotated credentials. Human developers and admins should likewise be issued short-lived credentials.
AWS Key Management Service
When you have sensitive data to store, you should encrypt it using a key. AWS KMS gives you an easy-to-use SDK for provisioning keys and using them to encrypt your data.
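A minimal boto3 round trip, assuming a hypothetical key alias. Note that kms.encrypt handles only small payloads (up to 4 KB); larger data is typically envelope-encrypted with generate_data_key.

    import boto3

    kms = boto3.client("kms")
    ciphertext = kms.encrypt(
        KeyId="alias/app-data",          # hypothetical key alias
        Plaintext=b"card-token-12345",
    )["CiphertextBlob"]

    plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
    assert plaintext == b"card-token-12345"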
AWS CloudTrail
AWS CloudTrail continuously monitors and records all the API calls made on your AWS account. Combined with giving each user their own credentials, it provides an auditable trail of the actions taken by every user of your system.
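For instance, a boto3 sketch that pulls the most recent API calls made by one user (the username is hypothetical):

    import boto3

    cloudtrail = boto3.client("cloudtrail")
    events = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "Username", "AttributeValue": "alice"}],
        MaxResults=50,
    )
    for event in events["Events"]:
        print(event["EventTime"], event["EventName"])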
Amazon Macie
Amazon Macie uses machine learning to discover the data you have in your AWS account and learn how it is accessed. It can warn you if, for example, a public S3 bucket contains private information, or if access patterns for data change suddenly, indicating that someone may be misusing their access.
AWS Web Application Firewall
Filter out potentially malicious traffic before it reaches your application using AWS WAF. With WAF you can create rules that help protect against common web exploits like SQL injection and cross-site scripting.
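A trimmed boto3 sketch of a web ACL that attaches AWS' managed common rule set (which covers cross-site scripting and other widespread exploits; a dedicated SQL-injection rule set can be added the same way). The names and scope are illustrative.

    import boto3

    wafv2 = boto3.client("wafv2")
    wafv2.create_web_acl(
        Name="app-waf",
        Scope="REGIONAL",                # use "CLOUDFRONT" for CloudFront distributions
        DefaultAction={"Allow": {}},
        Rules=[{
            "Name": "common-rules",
            "Priority": 0,
            "Statement": {"ManagedRuleGroupStatement": {
                "VendorName": "AWS", "Name": "AWSManagedRulesCommonRuleSet"}},
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {"SampledRequestsEnabled": True,
                                 "CloudWatchMetricsEnabled": True,
                                 "MetricName": "common-rules"},
        }],
        VisibilityConfig={"SampledRequestsEnabled": True,
                          "CloudWatchMetricsEnabled": True,
                          "MetricName": "app-waf"},
    )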