Cloud Computing Trends

TUSHAR RANJAN
10 min read · Jun 4, 2022

Cloud computing, from a technological viewpoint, has had an extensive impact on how advancement in tech happens. Pick any sector, heavily tech-dependent or not, and you will always find something related to the cloud. Technologies invented in the last ten years, and many that existed even before that, have evolved to use the cloud in one form or another; virtualization, distributed computing, and networking are examples.

Image source: Storyblocks

Cloud computing includes distinct elements such as clients, data centers, and distributed servers. It is a revolutionary technology that is scaling across geographical locations, providing flexible compute resources as services with the best possible cost-efficiency on a pay-per-use basis. This cutting-edge technology works in tandem with other technologies to give users convenient and effective access.

Cloud computing benefits individuals and corporations alike. Considered individually, the cloud has changed our day-to-day lives: we are almost certainly using cloud-hosted apps when we update our social media profiles, binge a new subscription series, or check our bank balances. These programs are not installed on our hard drives or devices; instead, they are accessed over the internet.

As technology evolves, new challenges arise, leading to new changes that become the talk of the decade and eventually its trends. So, in this blog, we will cover some recent trends in cloud computing.

What trends will we be covering? Here, take a look:

1. Kubernetes

2. AI in Cloud Computing

3. Hybrid and Multi-Cloud

4. Edge Computing

Kubernetes

Around mid-2014, Kubernetes was announced by Google. A lot has changed since then, and Kubernetes, also called K8s, has found its place among developers all around the world.

“Kubernetes is a portable, extensible, open source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation.”

Kubernetes is an orchestration platform, created at Google, to oversee containerized applications across various kinds of environments, for example, physical, virtual, and cloud infrastructure. It is an open-source framework that helps in creating and managing the containerization of applications.

Let’s look at two topics before moving forward:

Immutable infrastructure: An infrastructure deployment method in which modifications are not permitted directly on the server. Once servers are deployed, they are never modified. To install any change or upgrade, you build a new server from a base image that has all your needed changes baked in. This allows old servers to be replaced without any additional modification.

Containers: A container is a standalone unit of software that packages code, runtime, system tools, system libraries, and configs together. Containers allow an application to behave the same every time, no matter where it runs. Containers are lightweight: they don't come bloated with redundant or unneeded software, so compute resources aren't wasted on background processes.

“If your app fits in a container, Kubernetes will deploy it”

Kubernetes Architecture:

Kubernetes cluster has a master node, which handles the API, schedules deployments, and manages the cluster overall, and worker nodes, which are responsible for container runtime, such as Docker, along with an agent that communicates with the master node.

Kubernetes Architecture
Source: Docker Swarm and Kubernetes in Cloud Computing Environment

etcd: All data relating to the Kubernetes cluster is stored in a key-value store called etcd.

Scheduler: One of the fundamental elements of the Kubernetes control plane. It watches for new Kubernetes Pods with no assigned node and assigns them to a node for execution based on resources, policies, and 'affinity' specifications.
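To make the scheduler's job concrete, here is a toy sketch (not the real kube-scheduler, which considers many more signals): filter the nodes that can satisfy a pod's CPU request, then pick the one with the most free capacity.

```python
# Toy scheduling sketch: assign each pending pod to the node with the
# most free CPU that can still satisfy the pod's request.
def schedule(pods, nodes):
    """pods: list of (name, cpu_request); nodes: dict of node -> free CPU."""
    assignments = {}
    free = dict(nodes)
    for name, cpu in pods:
        # Filtering step: keep only nodes with enough spare capacity.
        candidates = [n for n, c in free.items() if c >= cpu]
        if not candidates:
            assignments[name] = None  # pod stays Pending
            continue
        # Scoring step: prefer the node with the most remaining CPU.
        best = max(candidates, key=lambda n: free[n])
        free[best] -= cpu
        assignments[name] = best
    return assignments
```

For example, `schedule([("web", 2), ("db", 3)], {"node-a": 4, "node-b": 3})` places `web` on `node-a` and `db` on `node-b`, since placing `web` first leaves `node-a` with too little room for `db`.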

Controller Manager: This component runs most of the controllers that regulate the state of the cluster and perform routine tasks. It can be thought of as a daemon running in a non-terminating loop, responsible for collecting information and sending it to the API server. It works by watching the shared state of the cluster and then making changes to bring the current state toward the desired state.
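The reconciliation idea behind the controller manager can be sketched in a few lines: compare desired state with observed state and compute the actions needed to converge. (A real controller runs this loop forever against the API server; the function below is a one-shot illustration.)

```python
# Minimal sketch of a reconciliation step for a replica controller:
# given desired and current replica counts, return the corrective actions.
def reconcile(desired_replicas, current_replicas):
    """Return the actions that move current state toward desired state."""
    diff = desired_replicas - current_replicas
    if diff > 0:
        return [("create-pod", i) for i in range(diff)]
    if diff < 0:
        return [("delete-pod", i) for i in range(-diff)]
    return []  # already converged: nothing to do
```

Calling `reconcile(3, 1)` yields two create actions; calling it again once the cluster reports three replicas yields an empty list, which is the "ideal state" the paragraph above describes.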

Container Runtime: It uses a containerization platform such as Docker, which packages the software that makes up the application and allows it to run on different platforms.

Kubelet Service: It ensures that the necessary containers are running in a Kubernetes Pod.

Kubernetes Proxy Service: This is a proxy service that runs on every node and helps make services accessible to external hosts. It assists in forwarding requests to the correct containers and is capable of primitive load balancing. It makes sure that the networking environment is predictable and accessible while remaining isolated. It also handles Pods on nodes, secrets, volumes, health checks for new containers, and so forth.
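The load-balancing role of the proxy can be illustrated with a hedged sketch (the real kube-proxy uses iptables/IPVS rules, not Python): a service name maps to several pod endpoints, and requests are spread across them round-robin.

```python
import itertools

# Toy stand-in for the proxy's load-balancing role: each service has a
# list of backend pod addresses, and requests cycle through them.
class ToyProxy:
    def __init__(self, endpoints):
        # endpoints: dict of service name -> list of pod addresses
        # (assumed static here; the real proxy watches them for changes).
        self._cycles = {svc: itertools.cycle(pods)
                        for svc, pods in endpoints.items()}

    def route(self, service):
        """Pick the next backend pod for a request to `service`."""
        return next(self._cycles[service])
```

With two backends for a `web` service, successive `route("web")` calls alternate between them, which is the primitive load balancing described above.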

Artificial Intelligence in Cloud

AI in the cloud is a set of tools that live in the cloud and can be called upon across a network as a service.

MKL Structure

AI is a set of functions to realize learning behavior via classification, regression, reasoning, planning, knowledge representation, search, or any other types of functions based on data analysis and perceived information from the data.

Understanding MKLs

One illustration of MKLs is given in the figure above, where each step can be described as follows:

  1. Monitoring an environment via sensor devices and measurement tools, and collecting data during the time window. Software-defined infrastructures (SDIs) provide the data gathering and storage facility. The source domain is the place where data is gathered from, e.g., highways in smart cities via sensors or cameras.
  2. Analysis of data based on the different functions in an AI context (AI engines). AI engines are developed to find a solution for a given use case, which may encompass:
  • Data preparation such as filtering, de-noising, normalization, and de-normalization
  • Knowledge generation engines such as classification, segmentation, association, regression, anomaly detection, prediction, inference engines, or semantic reasoners
  • Decision support or decision-making engines that produce the desired solution, such as optimization tools. Supervised learning, unsupervised learning, and reinforcement learning are all options for AI engines

3. Planning and policy based on the results of Step 2, including translating those results into parameters understandable by the system, e.g., adjusting traffic lights in a smart city, or transmit power allocation in 5G. Scheduling actions based on the results of Step 2 can also be handled here.

4. Executing the action, which can be carried out autonomously or with the assistance of a person. A collection of nodes in the target domain carries out the actions, such as a group of 5G users adjusting their transmit power or a group of robots changing their states. These phases continue until the learning process reaches convergence. Alternatively, a training phase can be run offline to evaluate the outputs of the AI process.
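The four steps above can be sketched as a tiny monitor-analyze-plan-execute loop, using the smart-city traffic example. All thresholds and parameter names here are made up for illustration; a real AI engine would replace the averaging step.

```python
# Illustrative sketch of the four-step pipeline above (traffic example).
def monitor(sensor_readings):
    # Step 1: collect raw data from the environment, dropping missing samples.
    return [r for r in sensor_readings if r is not None]

def analyze(data):
    # Step 2: a trivial "AI engine" - average congestion level in [0, 1].
    return sum(data) / len(data)

def plan(congestion):
    # Step 3: translate the result into a parameter the system understands.
    return {"green_light_seconds": 60 if congestion > 0.7 else 30}

def execute(policy):
    # Step 4: apply the action in the target domain.
    return f"set green light to {policy['green_light_seconds']}s"

def pipeline(readings):
    return execute(plan(analyze(monitor(readings))))
```

Feeding in congested readings like `[0.9, 0.8, None, 0.85]` produces the longer green-light action; light traffic produces the shorter one. In practice the loop would run continuously, as Step 4 notes.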

So far we have looked at how AI on the cloud works; now let's see why and how it helps a business. Following the flow described above, many companies have been rewarded, because AI brings human-like qualities such as reasoning, thinking, and learning, which are especially helpful for data analysis (one of its key features). That said, AI is also commonly used as a marketing tactic by service operators.

Benefits:

1. Transparency

2. Usability

3. Scalability

Types of AI as a Service Platform:

1. Bots

2. APIs

3. Machine Learning

4. Data Labeling

5. Data Classification

Hybrid Cloud and Multi-Cloud infrastructure

A combination of different environments forms a hybrid cloud. Hybrid cloud concepts are adopted across the world because almost no one today relies on a single public cloud alone. The most common hybrid cloud example is a combination of a public and a private cloud, such as an on-premises data center paired with a public cloud like Google Cloud.

“Multi-cloud” is the integration and combination of multiple public clouds. A business may use one public cloud as a database, one as PaaS, one for user authentication, and so on. If a multi-cloud deployment also includes an on-premises data center or a private cloud, it can be considered a hybrid cloud. The major difference is that a hybrid cloud infrastructure blends two or more different types of clouds, while multi-cloud blends different clouds of the same type.

Combining these two approaches might give the impression that service quality is compromised somewhere between them, but hybrid services are not about compromise; they are about combining the strengths of each approach.

Data that users access frequently should be kept on public servers, while sensitive data that only needs occasional updates and access should be moved to private servers, with access monitored. A well-integrated and balanced hybrid strategy gives businesses the best of both worlds.
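The placement rule just described can be expressed as a small routing function. This is a hedged sketch of the policy, not a real cloud API; the field names `sensitive` and `daily_accesses` are invented for illustration.

```python
# Toy data-placement policy for a hybrid cloud: sensitive or rarely
# accessed records go to the private cloud, everything else to the public.
def place(record):
    """record: dict with 'sensitive' (bool) and 'daily_accesses' (int)."""
    if record["sensitive"] or record["daily_accesses"] < 1:
        return "private"
    return "public"
```

A record that is both non-sensitive and frequently read lands on the public side; anything sensitive stays private regardless of access frequency, matching the monitored-access rule above.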

Hybrid and Multi-cloud

Areas where hybrid cloud and multi-cloud can be beneficial:

  1. A hybrid cloud is one that combines private and public clouds. Companies in areas where data security is managed by industry norms will benefit from a hybrid cloud implementation. A corporation can, for example, put critical data in a private cloud and low-risk data in a public cloud. Companies or departments with data-intensive tasks, such as media and entertainment, can benefit from this technique. They may have rapid access to huge media files and store data that doesn’t need to be accessed as frequently, such as archives and backups, by using high-speed on-premises technology.
  2. Multi-cloud is a technique that combines two or more public clouds. This method is ideal for companies who wish to prevent vendor lock-in or achieve data redundancy in the event of a fail-over. During an outage, the backup cloud provider can keep things running smoothly.

How to choose a cloud?

  1. Low latency needs: For low latency, a hybrid cloud solution may be best.
  2. Geographical considerations: If a company has offices in multiple locations and those countries have data residency regulations, a multi-cloud strategy can be helpful.
  3. Regulatory concerns: As mentioned above, for regulated data, use a hybrid cloud.
  4. Cost management: Pay special attention to price tiers at the start, and inquire about the provider’s tools, reports, and other resources for keeping expenses under control.

Edge Computing

An ‘edge’ is a model of data processing in which data is distributed among decentralized servers while a small quantity is maintained on local networks or storage. Edge computing is a new paradigm in which an edge server’s resources are placed at the Internet’s edge, close to mobile devices, sensors, end users, and the growing Internet of Things. Edge is a subset of cloud computing in which data doesn’t need to be sent to data centers for processing, so requests are answered quickly. A shared objective of cloud and edge computing is to avoid keeping data in a single place and instead disperse it across numerous sites. The main difference is that cloud computing prefers to store data in remote data centers, whereas edge computing still uses local disks to some extent. Local devices can work with data offline, using less bandwidth.
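A minimal sketch of this split, assuming a simple read-through cache (real edge platforms add eviction, consistency, and placement policies): an edge node answers from local storage when it can, and only falls back to the remote data center on a miss.

```python
# Toy edge node: serve requests from a local cache at the network's edge,
# falling back to the remote data center only on a cache miss.
class EdgeNode:
    def __init__(self, cloud):
        self.cloud = cloud   # dict standing in for the remote data center
        self.cache = {}      # local storage at the edge

    def get(self, key):
        if key in self.cache:
            return self.cache[key], "edge"   # low-latency local hit
        value = self.cloud[key]              # slow round trip to the center
        self.cache[key] = value              # keep a local copy for next time
        return value, "cloud"
```

The first request for a key travels to the data center; every repeat request is served at the edge, which is exactly the latency benefit discussed below.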

Image source: Trantor Inc.

Benefits:

Decreased latency

When dealing with repetitive requests from AI and Machine Learning applications, cloud computing solutions are typically too slow. If the workload comprises real-time forecasting, analytics, and data processing, cloud storage will not provide fast and smooth performance.

In the centralized model, data must be registered with the data center and can only be acted on with the center's involvement. Edge computing, on the other hand, processes data locally, minimizing the quantity of data that must be stored remotely.

Performing in distributed environments

A network's edge connects all of the network's points, from one edge to the next. It's a tried-and-true method of transferring data directly from one remote storage location to another without the need for data centers. Data can quickly reach the opposite ends of the local network, considerably faster than a cloud option.

Limited and unstable network connectivity

Edge computing allows data to be processed in local storage with in-house processors. This is useful in transportation: for example, trains that utilize the Internet of Things for communication don't always have a stable connection while traveling. When they are offline, they can work with data on local networks and synchronize with data centers once the link is restored.
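The offline pattern in the train example can be sketched as a simple write buffer (a hedged illustration; real systems also handle ordering conflicts and retries): writes made while the link is down are queued locally and flushed to the data center on reconnect.

```python
# Toy offline-first buffer: queue writes locally while disconnected,
# then flush them to the remote store once connectivity returns.
class OfflineBuffer:
    def __init__(self):
        self.pending = []   # writes made while offline
        self.synced = []    # stands in for the remote data center

    def write(self, item, online):
        if online:
            self.synced.append(item)     # direct write while connected
        else:
            self.pending.append(item)    # buffer locally during the outage

    def reconnect(self):
        # Flush everything buffered during the outage, preserving order.
        self.synced.extend(self.pending)
        flushed = len(self.pending)
        self.pending.clear()
        return flushed
```

After two offline writes, `reconnect()` reports two flushed items and the remote store holds all writes in order, mirroring how the train synchronizes once the link is restored.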

Edge computing is the balance between typical offline data storage, in which data does not leave the local network, and a completely cloud-based approach, in which nothing is saved locally.

Sensitive data can be kept remotely, while data that must be retrieved immediately regardless of the quality of the Internet connection can be accessed on the network’s edges.

Keeping sensitive private data in local storage

Some businesses choose not to share sensitive personal information with third-party data storage providers. The security of information is thus dependent on the reliability of suppliers rather than the organization itself. If you don’t have access to a reliable cloud storage provider, edge processing offers a middle ground between traditional centralized and decentralized storage.

Companies who don’t trust third-party suppliers with secret information might transmit sensitive data to the network’s edge. This gives businesses complete control over security and accessibility.

References:

  1. An Overview on Edge Computing Research | IEEE Journals & Magazine | IEEE Xplore
  2. Kubernetes Documentation | Kubernetes
  3. What Are Containers in Cloud Computing? — Intel
  4. Artificial Intelligence as a Service (AI-aaS) on Software-Defined Infrastructure | IEEE Conference Publication | IEEE Xplore
  5. Multi-cloud vs. hybrid cloud: What’s the difference? | Cloudflare
  6. Hybrid Cloud vs. Multi-Cloud (vmware.com)
  7. Docker Swarm and Kubernetes in Cloud Computing Environment | IEEE Conference Publication | IEEE Xplore

Published by: SNEHAL VARNE Shrutak Shende Aadvija Medhekar TUSHAR RANJAN
