Concepts of cloud
Cloud computing is the delivery of computing services—including servers, storage, databases, networking, software, analytics, and intelligence—over the Internet ("the cloud"). This allows for faster innovation, flexible resources, and economies of scale.
Docker is a software platform that allows you to build, test, and deploy applications quickly. Docker packages software into standardized units called containers that have everything the software needs to run including libraries, system tools, code, and runtime. Using Docker, you can quickly deploy and scale applications into any environment and know your code will run.
Applications run in isolated environments, which makes them portable and easy to manage.
Docker uses images as blueprints for containers. A Dockerfile is a script that contains instructions to build an image.
Docker Hub is a cloud-based registry where users can share and distribute Docker images.
Docker works by providing a standard way to run your code. Docker is like an operating system for containers. Similar to how a virtual machine virtualizes server hardware, containers virtualize the operating system of a server. Docker is installed on each server and provides simple commands you can use to build, start, or stop containers.
A Docker image is composed of multiple layers stacked on top of each other. Each layer represents a specific modification to the file system (inside the container), such as adding a new file or modifying an existing one. Once a layer is created, it becomes immutable, meaning it can't be changed.
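As a rough sketch of how layers come about (the image name layered-nginx, the nginx base image, and the file contents here are illustrative assumptions, not part of the original notes), each Dockerfile instruction produces one layer:
echo '<h1>hello</h1>' > index.html
cat > Dockerfile <<'EOF'
FROM nginx:latest
COPY index.html /usr/share/nginx/html/
RUN echo "build complete" > /build-info.txt
EOF
docker build -t layered-nginx .
docker history layered-nginx    # lists the image's layers, newest first; each instruction above appears as one immutable layer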
To install Docker on a remote server using PuTTY, you'll first need to ensure you have access to a Linux server (like Ubuntu, CentOS, etc.) via SSH. Here's a step-by-step guide:
Before installing Docker, it’s a good idea to update the package index:
sudo apt update

For Ubuntu, install the required packages:
sudo apt install apt-transport-https ca-certificates curl software-properties-common

For CentOS, run:
sudo yum install -y yum-utils

Add Docker's official GPG key. For Ubuntu:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

For CentOS:
sudo rpm --import https://download.docker.com/linux/centos/gpg

Add the Docker repository. For Ubuntu:
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

For CentOS:
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Install Docker. For Ubuntu:
sudo apt update
sudo apt install docker-ce

For CentOS:
sudo yum install docker-ce

Enable and start the Docker service:
sudo systemctl start docker
sudo systemctl enable docker

Check if Docker is running:
sudo systemctl status docker

You can also run a test container:
sudo docker run hello-world

If you want to run Docker commands without sudo, add your user to the docker group:
sudo usermod -aG docker $USER

After running this command, log out and back in for the changes to take effect.
You’ve now installed Docker on the remote server via PuTTY.
A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.
(in other, simpler words)
Containers are an abstraction at the app layer that packages code and dependencies together. Multiple containers can run on the same machine and share the OS kernel with other containers, each running as isolated processes in user space. Containers take up less space than VMs (container images are typically tens of MBs in size), can handle more applications, and require fewer VMs and operating systems.
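A quick, hedged way to see that shared kernel in practice (assuming Docker is installed and the small alpine image is available):
uname -r                            # kernel version on the host
docker run --rm alpine uname -r     # the container reports the same kernel version, because it shares the host kernel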
NGINX is a high-performance web server and reverse proxy server that is widely used for serving web applications, handling HTTP and HTTPS requests, and load balancing traffic. It is known for its speed, efficiency, and ability to handle a large number of concurrent connections with low resource usage.
In this tutorial, we’ll show you how to install NGINX on Linux.
sudo apt-get update

Next, run this command:
sudo apt-get install nginx

Then enable the firewall, check the NGINX version, and check the firewall status:
sudo ufw enable
nginx -v
sudo ufw status

After running this command, you should see the following:
Status: active
sudo systemctl status nginx

First, open AWS, search for EC2, then click Launch Instance. There, select a key pair (to use with PuTTY) and download it.
After that, launch the instance, open PuTTY, and paste the public IP into Host Name. Load the downloaded key pair under SSH > Auth > Credentials, then open the session.
sudo apt update
sudo apt install apache2
sudo su
cd /var/www/html/
Then:
ls
rm index.html
(rm is the remove command)
vi index.html
Now copy your public IP and paste it into a browser; you will see the text you wrote in index.html (a minimal example of such HTML is sketched below).
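A minimal sketch of what you might put in index.html before testing in the browser (the content here is just an assumed example):
cat > index.html <<'EOF'
<h1>Hello from my EC2 web server</h1>
<p>Served by Apache on Ubuntu.</p>
EOF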
First, open AWS, search for EC2, then click Launch Instance. There, select a key pair (to use with PuTTY) and download it.
After that, edit the network settings, click Add security group rule, and add rules for TCP, UDP, and All traffic with the source type set to Anywhere. Then launch the instance, open PuTTY, paste the public IP into Host Name, and load the downloaded key pair under SSH.
(SSH (Secure Shell) is a way to securely connect to another computer over a network. It's like a safe tunnel that allows you to control a remote computer and transfer files to it, without anyone else being able to listen in or interfere with the connection.)
Then go to Auth, then Credentials, load the key there, and open the session.
After that, log in with the username ubuntu (for the Ubuntu OS selected) and then type the following commands:
curl -sL https://github.com/ShubhamTatvamasi/docker-install/raw/master/docker-install.sh | bash
newgrp docker
docker ps
docker --version
docker pull nginx

The last command downloads NGINX as a pre-built Docker image, with a default NGINX configuration; it pulls all the necessary components for the container.
docker run --name docker-nginx -p 80:80 nginx

The --name flag is how you specify the name of the container. If left blank, a generated name like nostalgic_hopper will be assigned.
-p specifies the port mapping in the format -p host-port:container-port. In this case, you are mapping port 80 on the server to port 80 in the container.
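As an optional check (assuming the container above is running on the same server), you can confirm the mapping by requesting port 80 locally:
curl -I http://localhost     # should return HTTP headers from the default NGINX welcome page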
docker ps -a
docker rm docker-nginx
docker run --name docker-nginx -p 80:80 -d nginx

This creates a new, detached NGINX container; by attaching the -d flag, you run the container in the background.
docker ps
docker stop docker-nginx
docker rm docker-nginx
mkdir -p ~/docker-nginx/html
cd ~/docker-nginx/html
vi index.html
docker run --name docker-nginx -p 80:80 -d -v ~/docker-nginx/html:/usr/share/nginx/html nginx

Linking the Container to the Local Filesystem
cd ~/docker-nginx
docker cp docker-nginx:/etc/nginx/conf.d/default.conf default.conf
docker stop docker-nginx
docker rm docker-nginx
docker run --name docker-nginx -p 80:80 -v ~/docker-nginx/html:/usr/share/nginx/html -v ~/docker-nginx/default.conf:/etc/nginx/conf.d/default.conf -d nginx
docker restart docker-nginx

In cloud computing, orchestration is the process of coordinating and automating the management of applications, tools, and infrastructure across multiple clouds.
Kubernetes (sometimes shortened to K8s with the 8 standing for the number of letters between the “K” and the “s”) is an open source system to deploy, scale, and manage containerized applications anywhere.
Kubernetes has built-in commands to handle a lot of the heavy lifting that goes into application management, allowing you to automate day-to-day operations. You can make sure applications are always running the way you intended them to run.
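A small, hedged illustration of that automation (the deployment name nginx-web matches the one created later in these notes; the commands assume a running cluster):
kubectl scale deployment nginx-web --replicas=3   # declare the desired state: 3 replicas
kubectl get pods --watch                          # Kubernetes starts or replaces pods until 3 are running
kubectl delete pod <one-of-the-pods>              # hypothetical: delete a pod and watch it be recreated automatically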
Minikube is a tool that sets up a Kubernetes environment on a local PC or laptop. This tool provides an easy means of creating a local Kubernetes environment on any Linux, Mac, or Windows system, where you can experiment with and test Kubernetes deployments.
##First, open AWS, search for EC2, and download your key pair. Then click Launch Instance, select the Ubuntu 22.04 AMI, choose the t2.xlarge instance type, select the key pair, configure storage to 30 GB, allow all traffic in the network settings, and launch.
##Launch PuTTY, add the instance's IP address, add the key pair file, and open the PuTTY terminal.
curl -sL https://github.com/ShubhamTatvamasi/docker-install/raw/master/docker-install.sh | bash
sudo usermod -aG docker $USER
newgrp docker
sudo snap install kubectl --classic
kubectl version --client
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube version
minikube start --driver=docker
minikube start --driver=docker --force
minikube status
kubectl cluster-info
kubectl config view
kubectl get nodes
kubectl get pods
kubectl create deployment nginx-web --image=nginx
kubectl expose deployment nginx-web --type NodePort --port=80
kubectl get deployment,pod,svc
minikube addons list
(It will display all addons.)
minikube addons enable dashboard
minikube addons enable ingress
minikube dashboard --url
kubectl proxy --address='0.0.0.0' --disable-filter=true &
http://server_ip:8001/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/#/workloads?namespace=default

KVM (Kernel-based Virtual Machine) is a technology that allows you to run multiple operating systems on a single physical machine. It turns the Linux kernel into a hypervisor, enabling virtual machines (VMs) to operate as if they were separate computers.
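A quick, hedged check for KVM support on a Linux host (commands assume a standard Linux install):
grep -Ec '(vmx|svm)' /proc/cpuinfo   # non-zero output means the CPU supports hardware virtualization
lsmod | grep kvm                     # kvm_intel or kvm_amd appears if the KVM modules are loaded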
OpenStack is an open-source platform that lets you create and manage cloud computing services. It allows users to control computing power, storage, and networking in a data center through a web interface. Essentially, it helps organizations build their own private or public clouds, making it easier to deploy and manage applications.
QEMU is an open-source emulator and virtualization tool that allows you to run different operating systems on a host machine.
Amazon Virtual Private Cloud (VPC) is a virtual network that allows users to launch AWS resources in a logically isolated environment. It's a foundational service of AWS that gives users complete control over their virtual networking environment.
•EC2 is a virtual server that you can run your software on. VPC is a virtual network that you use to connect your virtual servers, and other resources.
Go to VPC and create a VPC, then create 4 subnets, where 2 subnets are private and the other two are public.
Creating a target group with port 8000.
Creating a load balancer.
(The image below shows this.)
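The same steps can be sketched with the AWS CLI, purely as an illustration (assumes the AWS CLI is configured; every ID and name below is a placeholder, not a value from these notes):
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-PLACEHOLDER --cidr-block 10.0.1.0/24    # repeat for the other three subnets (two public, two private)
aws elbv2 create-target-group --name web-tg --protocol HTTP --port 8000 --vpc-id vpc-PLACEHOLDER
aws elbv2 create-load-balancer --name web-lb --subnets subnet-PUBLIC1 subnet-PUBLIC2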
NIC (network interface card):
• connects a device to a network
• has a MAC address assigned by the manufacturer
• a layer 2 device
Switch:
• a multiport network bridge that uses MAC addresses to forward data
• a link layer device
Router:
• a device that forwards data between computer networks
• a layer 3 device
Hub:
• a network device used to connect multiple computers in a network
• all information sent to the hub is automatically sent out of every port to every device
TAP (works on layer 2 -- Ethernet frames): used to create a user-space network bridge.
TUN (works on layer 3 -- IP packets): creates a tunnel network to reach another network.
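A hedged sketch of creating both device types with iproute2 (the names tap0 and tun0 are arbitrary):
sudo ip tuntap add dev tap0 mode tap   # layer 2 device, carries Ethernet frames
sudo ip tuntap add dev tun0 mode tun   # layer 3 device, carries IP packets
ip link show tap0
ip link show tun0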
NAT (Network Address Translation) is a process in which one or more local IP addresses are translated into one or more global IP addresses, and vice versa.
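A common, hedged example of NAT on Linux is masquerading outbound traffic with iptables (eth0 is an assumed uplink interface name):
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE   # rewrite the source address of packets leaving via eth0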
Veth (virtual Ethernet) pairs are pairs of virtual network interfaces that are used to connect network namespaces together.
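A minimal sketch of a veth pair joining the default namespace to a namespace called ns1 (all names and addresses here are arbitrary examples):
sudo ip netns add ns1
sudo ip link add veth0 type veth peer name veth1
sudo ip link set veth1 netns ns1
sudo ip addr add 10.10.0.1/24 dev veth0
sudo ip link set veth0 up
sudo ip netns exec ns1 ip addr add 10.10.0.2/24 dev veth1
sudo ip netns exec ns1 ip link set veth1 up
ping -c 1 10.10.0.2        # reaches the other end of the pair inside ns1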
DPDK (Data Plane Development Kit) is a set of libraries and drivers that accelerates packet processing, making it possible to build packet forwarders without the need for costly custom switches and routers.
It allows one device to connect to a network.
A DPU (data processing unit) is a new programmable processor that helps move data around data centres. It ensures the right data goes to the right place, in the right format, quickly.
Open vSwitch (OVS) is used with hypervisors to interconnect virtual machines within a host and between different hosts across networks.
A free, open-source machine emulator; it can run various guest operating systems (OSes) and architectures on a single host system.
It is like a container that holds everything your application needs to run, including the code and libraries.
It is like a manager for containers. It helps to deploy, scale, and manage a group of containers, making sure they run smoothly.
A technology that allows you to run code written in different programming languages. It is a way to build high-speed, responsive web applications that can handle data and communicate over networks.
A firewall is a security system that controls incoming and outgoing network traffic based on predetermined security rules.
VXLAN is a tunneling protocol that tunnels Ethernet traffic (layer 2) over an IP network (layer 3).
A VTEP (VXLAN tunnel endpoint) is the device responsible for encapsulating and de-capsulating that layer 2 traffic.
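A hedged sketch of a VXLAN interface acting as a VTEP on Linux (the VNI 100, the device eth0, and the remote address are assumptions):
sudo ip link add vxlan0 type vxlan id 100 dev eth0 remote 192.0.2.20 dstport 4789
sudo ip link set vxlan0 up   # Ethernet frames sent into vxlan0 are encapsulated in UDP/IP toward the remote VTEP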
A Linux bridge is a kernel module that behaves like a network switch. It is usually used for forwarding packets on routers, on gateways, or between virtual machines.
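A minimal sketch of creating such a bridge with iproute2 (br0 and eth1 are assumed names):
sudo ip link add br0 type bridge
sudo ip link set eth1 master br0   # attach an interface to the bridge
sudo ip link set br0 up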
It is a tool for high-speed packet generation and testing; it is included in the Linux kernel.
Network namespaces are a feature of the Linux kernel that provides a way to create isolated network environments.
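A tiny, hedged demonstration of that isolation (the namespace name demo is arbitrary):
sudo ip netns add demo
sudo ip netns exec demo ip link show   # only the loopback interface appears; the host's interfaces are not visible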
TAP is often used to connect VMs or containers to a physical network.
(type 1 hypervisor)
A CNI (Container Network Interface) plugin is responsible for setting up the network for containers (assigning IP addresses, creating network bridges), enabling communication between containers and the outside world.
Examples: VLAN, IPvLAN, Calico, Flannel, VMware, etc.
It acts as a layer that allows containers to send and receive data seamlessly across various hosts.
Flannel runs a small, single-binary agent on every host; this networking tool gives every host an IP subnet.
When we send an email or a web page, the data does not travel as a single continuous stream; instead, it is broken down into smaller chunks called packets.
• Forwarding is the local action of moving arriving packets from a router's input link to the appropriate router output link.
• Routing is the global process of determining the full paths packets take from source to destination.
A protocol is a set of rules that determines how a packet is to be transferred and received, and in which format.
TCP ensures reliable, ordered delivery of data between applications. It handles things like breaking data into packets.
IP is responsible for addressing and routing packets across the internet.
HTTP is the protocol that powers the World Wide Web, defining how messages are formatted and transmitted between web browsers and servers.
TCP provides reliable transmission of data.
UDP provides faster but less reliable transmission of data.
Packet flow is the sequence of packets that travels from a source computer to a destination.
•Packets are segments of data that are routed through a network of interconnected devices, such as switches and routers, before reaching their destination.
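A few hedged ways to observe this from a Linux shell (example.com is just a placeholder host, and traceroute may need to be installed):
traceroute example.com        # lists the routers (hops) packets cross on the way to the destination
ping -c 3 example.com         # sends three small packets and measures each round trip
curl -v http://example.com/   # an HTTP request carried in TCP segments inside IP packets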
-- The image below shows the network setup for packet flow.